Measuring Results and Establishing Value

At the end of the day, how do you know you’ve been successful? What value are you providing to your organization or clients? In today’s climate of budget cuts and lay-offs, it’s particularly important that you establish clear measures of success before embarking upon any venture.

Fast Company magazine (October 2005) calls this the “age of accountability.” The magazine reports that 78% of CEOs at the worst-performing 20% of companies in the S&P 500 have been replaced within the past five years. Clearly, from CEO right on down to middle manager, no one is immune to the need to demonstrate value.

So how do you show you’re moving the needle?

Establishing Effective Metrics
Effective metrics take into account the objectives of the initiative. Who are you trying to reach? What do you want them to do? How will you know that your initiative’s goals have been accomplished? These measures should be incorporated into any proposal or agreement. In fact, a good rule of thumb is that if you can’t measure it, you shouldn’t do it.

Measurement can sound daunting. People struggle with measuring the results of individual initiatives for a multitude of reasons. For example, they often view measurement as overly complicated, fear they won't be successful at meeting goals, or assume measurement will be costly.

While measurement can be complex, it doesn’t have to be. Here’s our suggestion for beginning the process of establishing measures in any situation: When embarking upon a project, ask your boss or your client or whoever is ultimately engaging your services, “How will we know we’ve been successful?” Note the emphasis on “we.”

In order to design effective measurement criteria, it’s critical that everyone involved in the initiative – whether actually executing the work or signing the check that pays for it – has a clear idea of what success looks like. The implementer may have a very different idea of victory than the person ultimately responsible for the initiative. It’s important to achieve consensus before getting underway.

Metrics can vary between initiatives and don't necessarily have to be quantitative. It's not always possible or desirable to have a consistent formula for measuring every initiative. For instance, some of our public relations colleagues talk about results in terms of the number of media impressions generated. Impression counts may be a valid component of a public relations metrics program, but measurement shouldn't automatically stop there.

A public relations measurement program that includes only the number of impressions or clips often stops short of addressing real value, much like a meeting organizer who measures success based solely on whether participants rated the day a “1” or a “5.” At the end of the day, it’s what meeting participants do with what they learned rather than how much they enjoyed their experience that really matters.

Outputs vs. Outcomes
Effective measurement should go beyond evaluating the output of a program, such as the number of media impressions or whether meeting attendees thought the sessions were the right length, to examining the program's outcome, such as a change in behavior or opinion. A comprehensive metrics program should take into account the overall objectives of the initiative – whether key audiences were influenced, leads were generated or resources were used more efficiently. After all, sometimes a lack of impressions is the desired outcome, such as when a crisis occurs and isn't overplayed in the media, or when an organization is left out of a negative industry article.

It’s also important to manage expectations regarding anticipated results. Some organizations want to associate every activity with increased sales or stock price. These outcomes are obviously mission critical. However, it’s not always possible to establish a direct link between all elements and these ultimate outcomes. Synergies develop between activities that make them stronger together than on their own. There’s also a domino effect – if you get individual components moving, they ultimately lead to the outcome, but there are important in-between steps that need to occur as well.

A good example of this is McDonald's stock hitting a 52-week high after reports that sales increased at U.S. stores for the fifth consecutive month. CEO James Cantalupo was quoted in Crain's Chicago Business as attributing the sales increase to longer hours, efforts to improve customer service and new products. The increase may be due to any one of these areas or all of them. Most likely it's some combination that produced the result, and even McDonald's can't pinpoint the exact formula behind the turnaround.

Fear of Failure
What if you're not successful in meeting your goals? Yes, associating your work with outcomes creates vulnerability. But without measurement, you will never know the extent of your accomplishments or the opportunities that exist for improvement. Ultimately, you'll be far more vulnerable if you don't evaluate your results, because you'll be left at the mercy of popularity contests when budget cuts roll around. If you do have a problem, the sooner you diagnose it, the sooner you can start to set things right.

In designing measurement programs, we recommend incorporating diagnostic elements as much as possible. That means we go beyond determining whether a program is successful to uncovering why it worked or didn’t work. We can then apply our findings to the program being evaluated as well as to other endeavors. For example, in our employee surveys, we use advanced statistics to identify the key drivers of a desired outcome. This enables us to provide our clients with two kinds of measurement: First, whether they are meeting their goals; and second, where they should focus resources to get the most bang for their buck (or euro or yen).
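For readers who want a concrete picture of what a key-driver analysis can look like, here is a minimal sketch. The survey items, the data and the choice of a simple standardized regression are illustrative assumptions rather than a description of our actual methodology, which varies by client and survey:

```python
import numpy as np

# Hypothetical survey data: rows are respondents, columns are 1-5 ratings of
# individual communications items (the labels below are invented for the
# example). "outcome" stands in for an overall outcome item, such as overall
# satisfaction with internal communications.
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(200, 3)).astype(float)
outcome = (0.6 * items[:, 0] + 0.3 * items[:, 1] + 0.1 * items[:, 2]
           + rng.normal(0, 0.5, 200))

# Standardize predictors and outcome so the coefficients are comparable.
def zscore(x):
    return (x - x.mean(axis=0)) / x.std(axis=0)

X = zscore(items)
y = zscore(outcome)

# Ordinary least squares: the standardized coefficients give a rough
# "key driver" ranking -- larger absolute values suggest items more strongly
# associated with the outcome.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

labels = ["Manager communication", "Town hall meetings", "Intranet content"]
for label, beta in sorted(zip(labels, coeffs), key=lambda t: -abs(t[1])):
    print(f"{label}: standardized driver weight {beta:.2f}")
```

In real engagements the survey items are usually correlated with one another, so isolating each driver's contribution takes more than a single regression; the sketch simply shows the underlying idea of ranking items by how strongly they are associated with the desired outcome.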

Determining Measures of Success
Following are measures of success that we have developed in collaboration with clients over the past ten years. Not all measures are quantitative or particularly costly – qualitative measures can also be effective and can often be tailored to fit just about any budget. The key, as recommended earlier in this article, is to agree on the measures that will be applied.

• Within a McDonald’s department, we gauged improvement in the effectiveness of internal communications through progress noted on key measures of an employee communications survey. For example, after implementation of our recommendations, the number of employees who agreed they had the information they needed to do their jobs well increased from 47% to 79% (we actually exceeded our goal of 72%).

• When we developed a corporate messaging framework for Baxter International, a critical evaluative component was the achievement of consensus across multiple business units on messages that resonated among key external audiences. When the messages had been tested and refined through global qualitative research among medical professionals and received widespread internal approval from Baxter’s senior management, we knew we’d hit the mark.

• At Exelon Corporation, measures of success included our ability to provide concrete recommendations regarding streamlining and improving the efficiency of communications vehicles. Through a baseline communications audit, we recommended changes in the utilization of communications vehicles that resulted in savings in the six figures. We then conducted a follow-up audit that quantified the Exelon corporate communications department’s resulting increase in effectiveness. The bottom line: We helped Exelon save money while improving communications results.

• For another company, we measured increases in officers’ abilities to communicate with subordinates following individualized coaching by surveying their employees before and after the training sessions. Five out of eight officers had significantly improved their communications skills, according to their employees (a simplified sketch of this kind of before-and-after comparison appears after this list).

• We evaluated our success in designing an internal communications plan and staffing function for another organization by whether the recommended system was self-sustaining (didn’t require the continued use of an outside consultant) and allowed the company president to use her time more effectively.

• For a major retail chain that wanted to appeal to the challenging teen market, we evaluated our ability to identify consistent and compelling messages to use in communications with teenagers of various cultures. Our focus groups helped identify three messages that allowed the chain to confidently “speak in one voice” to teens via the web site and other vehicles.
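As promised above, here is a simplified sketch of the kind of before-and-after comparison that can lie behind a finding such as "five out of eight officers significantly improved." The ratings, sample sizes and use of Welch's t-test are assumptions made for illustration, not a description of our actual survey instruments:

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 employee ratings of one officer's communication skills,
# collected before and after the coaching sessions. The respondent groups
# may differ between waves, so the samples are treated as independent.
before = np.array([2, 3, 3, 2, 4, 3, 2, 3, 3, 2], dtype=float)
after = np.array([4, 4, 3, 5, 4, 4, 3, 4, 5, 4], dtype=float)

# Welch's t-test (does not assume equal variances). A small p-value combined
# with a higher post-coaching mean would count as a significant improvement.
t_stat, p_value = stats.ttest_ind(after, before, equal_var=False)

print(f"Mean before: {before.mean():.2f}, after: {after.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05 and after.mean() > before.mean():
    print("Improvement is statistically significant at the 5% level.")
```

A check like this would be run once per officer; officers whose results cleared the chosen significance threshold would be counted among those who improved.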

These measures helped us gauge not only our clients’ successes, but ours as a company as well.

Jenny Schade is president of JRS Consulting, Inc., a firm that helps organizations build leading brands and efficiently attract and retain employees and customers. Subscribe to the free JRS newsletter on www.jrsconsulting.net.

© JRS Consulting, Inc. 2008