
When Measurement Goes Bad:
Avoiding Research Pitfalls


Measurement gurus are generally very articulate about the perils of failing to conduct research. However, there is one thing worse than not conducting any research at all, and that is conducting bad research, as evidenced by the faulty television network exit polls carried out during the recent U.S. Presidential Election.

As those exit polls demonstrated, inaccurate research can lead either to a false sense of confidence in an outcome or to giving up too easily on a potential solution. (We will leave it up to you, gentle reader, to decide how you interpreted the exit poll predictions that John Kerry would be the next U.S. president.)

Poorly designed or interpreted research is a slippery slope: once you start down it, recovering without embarrassing admissions or reversals can be difficult. In this article, we share two common pitfalls of research, suggestions on how to avoid them, and several case study examples, based on our 15 years of conducting market research for organizations and their agencies.

PITFALL #1: MEASURING THE WRONG THING
A public relations agency president recently confided, "We just can't seem to get measurement in our programs. We always include it in new business presentations because clients say they want it, but then they seem so bored by that section of the pitch and never want to pay for that option."

This anecdote clearly demonstrates an invaluable tenet of measurement: If your key stakeholders aren't interested in the measurement you are proposing or conducting, then you are most likely measuring the wrong thing. If your stakeholders are yawning or moaning about the budget when you talk about measurement, it's time to have a heart-to-heart conversation about what they are really trying to accomplish with the initiative.

If you think of measurement as "an option," ask yourself this question: Is success optional? In other words, are you comfortable at the end of an initiative if neither you nor your client (internal or external) knows whether you have been successful?

Ultimately, you make yourself extremely vulnerable if you don't incorporate evaluation into your work, because you will be at the mercy of popularity contests when budget cuts roll around. In fact, a good rule of thumb is that if you can't measure it, you shouldn't do it. Don't embark upon a project with measurement treated as "optional." While you might offer a selection of evaluation techniques, some requiring additional expenditure, it's in your own best interest to build a basic level of measurement into your program.

Establishing Measures of Success
Here's our suggestion for beginning the process of establishing measures of success in any situation: When embarking upon a project, ask your boss or your client or whoever is ultimately engaging your services, "How will we know we've been successful?" Note the emphasis on "we."

In order to design effective measurement criteria, it's critical that everyone involved in the initiative — whether actually executing the work or signing the check that pays for it — has a clear idea of what success looks like. The implementer may have a very different idea of victory than the person ultimately responsible for the initiative. It's important to achieve consensus before getting underway.

While measurement can be complex, it doesn't have to be. Metrics can vary between initiatives and don't necessarily have to be quantitative. By asking the right consulting questions before a project even begins, you can build into every initiative measures of success that keep you on the right path and don't have to cost an arm and a leg.

Case Study
Here's a very simple example of how a candid up-front discussion, and the very basic measure of success it established, delighted a client right from the start of an engagement.

When an organization retained us to facilitate a directors' meeting, we met with our client in advance to review his meeting needs and collaborate on an agenda. We asked two key questions that are helpful for any initiative: "How will we know we've been successful?" and "What would constitute a complete failure?" Our client explained that the success of the meeting depended largely on all of the board members being present. Unfortunately, attendance had historically been low, which interfered with the group's work.

Our work began immediately, as we collaborated on how this client could encourage attendance at the upcoming session. We discussed what he had been doing to invite participants and recommended strategies for increasing attendance. Fast-forward to the meeting day: For the first time, all directors were present and actively participating, leading to a very productive meeting. The result was a client thrilled with our work before the actual meeting even began!

Obviously, the up-front conversation about how our client defined success for this initiative was invaluable, as it led to our understanding exactly what he needed from us. Without those up-front questions and an agreed definition of success, we might have failed in the client's eyes before the actual meeting facilitation even began.

PITFALL #2: TALKING GIBBERISH AND JARGON
I have a confession to make — I wasn't always such a gung-ho research person. In fact, my earliest encounters with research, when I worked in public relations firms 20 years ago, were a love/hate sort of experience. I loved how much more effective our work became when the research taught us about our target audiences. But I hated the gibberish and jargon we often received from the research companies that delivered our reports. In fact, I hated that gibberish so much that I learned to do research myself because I wanted to translate that alien research language into real information that we could act upon.

Beyond all of the numbers, research should tell you a story in a way that you clearly understand and that you can use to improve your situation. As one of our clients at Kraft Foods once articulated, "When we get the research results back, I want to feel in my gut that this is the right thing to do."

His comment leads right into our second invaluable measurement tenet: Never get so bogged down in data that you can't hear your own gut feelings. Charts and graphs are helpful visual aids, but at the end of the day, research should clearly convey the current situation and help you make smart decisions about how to proceed.

For example, we use advanced statistics in our communications surveys of employees to identify the key drivers of effective communications. By key drivers, we mean the strategies and tactics that are most likely to produce a desired outcome, such as employees feeling informed or staff supporting a merger or reorganization.

This process allows us to advise our clients where they should focus resources in order to really move the needle on achieving better business results. The pay-off for clients is more effective communications, significant cost savings and quantified measurement of results.
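For readers who want a concrete picture of what a key-driver analysis can look like, here is a minimal sketch in Python. The survey questions, ratings and column names are hypothetical illustrations, and the simple standardized regression shown is only one possible approach to identifying drivers; it is not a description of our actual statistical methodology.

    # Minimal key-driver sketch: which communication tactics best predict
    # employees feeling informed? All data and names here are hypothetical.
    import numpy as np

    drivers = ["supervisor_communications", "intranet_articles", "town_halls"]

    # Each row: one employee's 1-5 ratings of the three tactics.
    X = np.array([
        [4, 2, 3],
        [5, 3, 4],
        [2, 4, 2],
        [3, 3, 3],
        [5, 2, 5],
        [1, 4, 2],
    ])
    # Outcome rating: "I get the information I need to do my job well."
    y = np.array([4, 5, 2, 3, 5, 1])

    # Standardize so the regression coefficients are comparable across drivers.
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()

    # Ordinary least squares; larger standardized coefficients suggest
    # stronger key drivers of the outcome.
    coefs, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    for name, beta in sorted(zip(drivers, coefs), key=lambda t: -abs(t[1])):
        print(f"{name}: {beta:+.2f}")

In real surveys the same idea is applied to far more respondents and tactics, but the output is the same kind of ranked list that tells a communicator where to focus resources.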

Case Studies
We identified supervisor communications as a key driver of employees feeling informed within a McDonald's Corporation department. We also noted that particular areas within the department were lagging in supervisor communications skills and worked with our client to ensure the appropriate supervisors received communications training. As a result, supervisor communications quality improved, and consequently, the number of employees who reported receiving the information they needed to do their jobs well increased from 47% to 79% over two years. The value that employees placed upon internal communications also multiplied dramatically, leading to an expanded department in great demand.

Findings from our communications audit of Exelon Corporation employees helped the corporation realign its communications vehicles — leading to savings in the six figures — and provided clear direction for how the Corporate Communications department could help employees to feel better informed. Eighteen months later, we conducted a follow-up audit that quantified the Exelon Corporate Communications department's resulting increase in effectiveness.

Eliminating the "Option" of Measurement
It's in your best interest to include measurement in every initiative. Not all measures are quantitative or particularly costly — qualitative measures can also be effective and can often be tailored to fit just about any budget size. The key, as recommended earlier in this article, is to agree on the measures that will be applied.

By the way, one of the highlights of my career occurred during a presentation I made to 50 communications professionals at Exelon Corporation regarding our communications audit process. Towards the end of the presentation, a communicator raised her hand and asked, "You mean you do research we can really use?" With that question, I knew I'd met our own measure of success.

Jenny Schade is president of JRS Consulting, Inc., a firm that helps organizations build leading brands and efficiently attract and retain employees and customers. Subscribe to the free JRS newsletter on www.jrsconsulting.net/newsletter.html

© JRS Consulting, Inc. 2007