Measuring the Unmeasurable

Several years ago, I was given the challenge of facilitating part of a company through a large-scale restructuring initiative. One of the main reasons our firm was selected had to do with our considerable expertise in the area of performance diagnostics, a fancy way of saying that we were pretty good at "sizing up" hidden value (cost savings or service level improvements) that could be released through various process improvement initiatives.

As was the practice at our firm, the principal consultants on the project each took part of the business and led their respective team(s) through a process of defining appropriate measurements, baselining their performance, benchmarking them against “best in breed” competitors, analyzing gaps, assessing best practices, and building improvement plans. For consultants, this drill is pretty routine.

Well, the routine was about to change, and did change for me the minute I set foot off the airplane to start the engagement. You see, while all of the other consultants were given typical business functions as "their team" to facilitate, yours truly was given a few real winners: Internal Auditing, Risk Management, and Corporate Planning. I'm rarely one to whine about the cards I'm dealt, but this was a little nuts. How would I even begin to define measures for these functions, not to mention the later stages of baselining, benchmarking, and gap analysis? Someone either had a lot of faith in me, or was setting me up for something nasty.

Ironically, it turned out to be one of my favorite assignments. While the team I led was about as enthusiastic as I was at the outset, we all decided to approach it as a challenge…a chance to break some new ground.

While there are many opinions about how to measure these types of functions (functions that are distanced from the end customer, with workload that is largely discretionary, and few if any tangible "widgets" produced), we decided to focus on what we eventually referred to as the three C's: Customers, Competencies, and Critical Path.

Customers: For us, this was a logical place to start. Since these functions were quite removed from the end customer, we needed to define a surrogate customer of sorts: constituents whose business performance and survival were dependent on the business function being measured. The audit committee of the board, plant managers with P&L accountability, and business unit leaders, for example. From there, a service level agreement complete with performance standards served as the blueprint from which our baseline and benchmarks were then derived. In many cases, this was the first time the real customer had been identified, so it wasn't uncommon to find a number of key functions that weren't even on the customer's radar screen. Some would fall off the board altogether, as the customer would deem certain functions (often performed for years prior!) unnecessary going forward.

Competencies: Most functions like these require niche professional skills, truly unique disciplines. So the next logical place for us to go was to build a competency profile. For example, if the company wanted to have a "crack" internal audit staff (to serve the workload demanded by their customer, of course!), a good yardstick of progress would be a performance measure that indicated the presence of specific competencies and expertise. These could be "soft" measures, such as the presence of certain skills, or "hard" indicators, like training hours or continuing education credits.

Critical Path: This was simply an indicator of progress against critical projects that emanated from the customer’s expectations. These indicators were different from the rating given by key customers and/or performance against negotiated service standards. Critical Path indicators dealt with very specific and key initiatives that were central to that function’s annual plan. While these may have been redundant in some cases, the team felt they were worth the added weight and specificity in the overall framework.

In the end, we had a balanced set of measurements that provided a clear picture of each function's relative contribution. Measures that could be clearly identified, counted, and benchmarked, and used as a guidance system for gauging long-term performance.

Some would say it's not perfect, and few performance management frameworks are. But it served the purpose of getting these functions involved in the restructuring in a positive way, and it initiated a significant improvement in their contribution to the business.

A far cry from "you can't measure us, we're too different!"


Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at
