How to Conduct Research That Moves the Business

"We may be lost, but we're making good time."

One of my favorite aunts was known for this saying, and I think of it every time I encounter a situation where the benefits of research risk being sacrificed to a procedural requirement or a belief about how something "should be done."

Yes, we're in the research business and we want to be scientific and precise. It's reassuring to follow procedures and generate lots of data. At the same time, it's critical to use common sense and make sure that our approach yields information that improves business results rather than a huge research report that sits on a shelf gathering dust.

Based on our experience conducting research over the past 15 years, we present three guidelines for research that you can really use.

Define Success Clearly

Before you begin a study, clearly articulate your ultimate goals and how you will know you have achieved them.

Ultimate goals are defined as improved business results (e.g., increased engagement, better utilization of resources) rather than the typical stated objectives for a research project, which are often about methodology (e.g., surveying 500 employees or conducting eight focus groups). In other words, start by identifying your measures of success and then work backwards to determine your approach.

Clearly defined goals will help determine both the methodology and the deliverable for the research. For example, we conduct communications audits for numerous organizations that wish to maximize the effectiveness of their communications. For some of these companies, a separate research report would be cumbersome, so our work moves straight from the audit to the development of a comprehensive communications plan that includes headlines from our findings.

For another company that was looking to create a communications function for a regional office, we defined success as having the function up and running by the end of our assignment. Our deliverables therefore included a communications plan, a job description, interviews with potential candidates and the hiring of a communications manager.

This doesn't mean we advocate scrapping reports entirely. Reports are quite useful, particularly for comparing progress against benchmarks or sharing findings in an executive summary for management. The point is to ensure that your deliverable meets your needs rather than conforming to a standard approach.

Present Findings Clearly and Meaningfully

I once attended a presentation for a public relations account team conducted by two agency research directors, who passionately reviewed extensive charts and graphs and talked about means and regression analyses. Afterwards, I overheard one account executive whisper to another, "Wow — it was like they were making research love. I thought they were going to light up a cigarette afterward. I don't have a clue what they were talking about."

While enthusiasm is always a plus, it's vital to keep your audience in mind when you explain your findings. The more you can speak your client's language (e.g., "move more cases") and connect what you are doing to their bottom line and values, the more likely you are to succeed in helping them improve their condition.

I often hear questions about the most "valid" way to present study results, e.g., the mean response or "top two box" percentages. The answer is that it depends: on what you are trying to do with the research, on the audience for your presentation and on how your audience will use the information.

In general, presenting results in language that the audience can best understand will feel most comfortable to the listener. Particularly for non-research audiences, hearing that the mean was 2.7 on a five-point scale can be difficult to interpret. It will probably be clearer to report that 78% of respondents "somewhat" or "strongly" agree with a statement about an initiative.
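To make the contrast concrete, here is a minimal sketch in Python of summarizing the same five-point item both ways. The scale labels and response counts are hypothetical, invented so the "top two box" share happens to land at the 78% mentioned above; they are not data from an actual study.

```python
# Minimal sketch: two ways to summarize one five-point survey item.
# The response counts below are hypothetical, chosen for illustration.

# Counts for ratings 1 ("strongly disagree") through 5 ("strongly agree")
counts = {1: 10, 2: 25, 3: 75, 4: 240, 5: 150}
n = sum(counts.values())  # 500 respondents

# Summary 1: the mean rating -- precise, but hard for a lay audience to read
mean = sum(rating * count for rating, count in counts.items()) / n

# Summary 2: "top two box" -- the share choosing 4 ("somewhat agree")
# or 5 ("strongly agree"), which reads naturally as "% who agree"
top_two_box = (counts[4] + counts[5]) / n

print(f"Mean rating: {mean:.1f} on a five-point scale")
print(f"Top two box: {top_two_box:.0%} somewhat or strongly agree")
```

The same data underlie both numbers; only the second reads like plain English to a non-research audience.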

At times, it may be helpful to tailor how results are presented for various audiences, even for the same study. For a community organization client, we surveyed local citizens and asked them to rate the quality of a number of community programs offered by our client. When we met with our client contacts, who were deciding which programs to keep and which to terminate in the coming year, we drew their attention to "top box" ratings (the highest rating on a five-point scale) because we felt that this rating gave them the clearest direction on which programs were most and least successful.

However, when we presented an overall summary of our findings at a community meeting that included the local press, we focused on the "top two box" ratings because we knew that this approach was comparable to how other local organizations presented their survey results. Presenting the narrower "top box" ratings to this broad audience would have portrayed the various programs as less successful and created a story that didn't exist.
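As a small illustration of how the two cuts of one data set can tell different stories, here is a sketch in the same spirit; the program names and rating distributions are invented for this example.

```python
# Sketch: the same ratings summarized as "top box" vs. "top two box".
# Program names and rating shares are hypothetical.

# Share of respondents giving each rating (1-5), per program
programs = {
    "After-school tutoring": {5: 0.35, 4: 0.30, 3: 0.20, 2: 0.10, 1: 0.05},
    "Community garden":      {5: 0.20, 4: 0.50, 3: 0.20, 2: 0.07, 1: 0.03},
    "Senior transport":      {5: 0.15, 4: 0.30, 3: 0.30, 2: 0.15, 1: 0.10},
}

for name, shares in programs.items():
    top_box = shares[5]                   # decision view: strongest rating only
    top_two_box = shares[5] + shares[4]   # public view: comparable to peers
    print(f"{name:22s} top box: {top_box:.0%}   top two box: {top_two_box:.0%}")

# Note that the ordering can flip: "Community garden" trails on top box
# (20% vs. 35%) but leads on top two box (70% vs. 65%).
```

Neither cut is wrong; the decision-makers needed the sharper discrimination of the top box, while the public comparison called for the measure other organizations use.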

Strategically Set Improvement Goals

I have yet to meet a client who isn't eager to improve upon their most recent employee or customer survey results. The desire to set an improvement goal is admirable, but it's important to carefully identify a reasonable goal.

For example, when it came time to survey the employees of one organization and compare findings to previous results, our client informed us that he and his team had set goals of 10 percentage point improvements on six key measures. These goals had been internally publicized and, in fact, the client and his entire team had incorporated them into their own performance goals that were tied to their compensation. When I asked how he had arrived at this improvement goal, he replied that it seemed like "a good, solid number."

Unfortunately, this target was both overly ambitious and unnecessary. I explained to the client that while his ambition was commendable, a four percentage point increase alone would be statistically significant and would therefore represent quite an improvement. Although he ultimately achieved the four percentage point increase and received my congratulations, he had set himself up for perceived failure within his own organization because of the large goal he had publicized internally. The following survey year, we collaborated on setting more realistic improvement goals.
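Whether a four-point gain clears the bar for statistical significance depends on sample sizes and baseline scores, which aren't given here. The sketch below works through one hypothetical case (1,500 respondents per survey wave and a favorable score moving from 50% to 54%) using a standard two-proportion z-test; the numbers are assumptions for illustration only.

```python
import math

# Sketch: is a 4-percentage-point gain statistically significant?
# Sample sizes and baseline score are hypothetical assumptions.

n1, p1 = 1500, 0.50   # benchmark wave: 50% favorable
n2, p2 = 1500, 0.54   # follow-up wave: 54% favorable (+4 points)

# Pooled two-proportion z-test
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se

# Two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.3f}")
# With these assumed numbers, z is about 2.19 and p about 0.028, so the
# 4-point gain is significant at the 0.05 level; a 10-point target goes
# far beyond what significance alone would require.
```

Smaller samples would raise the bar: the same four-point gain with 500 respondents per wave would not reach significance.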

As part of measuring progress, clients often want to conduct a follow-up employee or customer survey too soon after the original study. I've received requests for follow-up research as soon as six months after a benchmark study. In these cases, I explain that while it's difficult to turn down business, I feel it would be unethical to conduct the follow-up study so quickly. I usually recommend waiting 18 months before conducting a follow-up study. That gives my clients the opportunity to plan and implement an intervention that can then be accurately measured.

When deciding on the timing of follow-up research, ask yourself two questions: What is different now? Has my audience had time to absorb this change? If you are satisfied with your answers, it may be time for that follow-up research.

Overall, observing research protocols and standards is important, but it is equally critical, if not more so, to use common sense when designing and conducting a study. This practical approach will ensure that the research helps move the business rather than just taking up room on an office shelf.

Jenny Schade is president of JRS Consulting, Inc., a firm that helps organizations build leading brands and efficiently attract and motivate employees and customers. Subscribe to the free JRS newsletter at www.jrsconsulting.net.

© JRS Consulting, Inc. 2008