Peer Benchmarking Initiatives: Revisited

Over the past several weeks, I’ve gotten more than a few comments on my June 16th column regarding peer-company-sponsored initiatives. Given the volume of comments, which ranged from “spot on” to “downright delusional”, I thought it would be a good idea to take another look at this topic.

First, my acknowledgements to some of the more prominent programs that were mentioned in the article. My intention was not to endorse or condemn any of these programs specifically. In retrospect, my biggest failing may have been that I “painted them all with the same brush”. That was not my intention. My failure to discern between the “good” and the “not so good”, while perhaps frustrating to the sponsor companies, was, however, deliberate. The programs I mentioned by name were mentioned only to help the reader identify what I meant by “peer company sponsored initiatives”, NOT to endorse or condemn any one in particular. If it was interpreted as anything other than that, I apologize to both the program sponsors and their participants.

For clarification, my main objective with the article was to offer a guide, or checklist, to help the reader discern what to LOOK for, and what to LOOK OUT for, in such programs. All programs, whether sponsored by peer companies, consultants, or independent facilitators, have strengths, weaknesses, and risks. In fact, if you look through my past columns on benchmarking (see article index), you’ll find that I offer similar analysis and guides for other types of programs as well. There is no perfect solution. Again, my only objective was simply to help the reader discern what is best for them.

This year alone, the number of programs available (peer sponsored, and others) will nearly double. And because of this, many companies are facing the tough decision of which ones to participate in, and which ones to pass on. Unlike the early ’90s, resource limitations prohibit companies from participating in everything out there. And despite what may be advertised by these programs, none of them are “free” on any dimension.

Our research has shown the cost of data collection alone to be many times the “entry fee” of such programs (assuming there is one). Offering a way for the reader to pick and choose in an educated manner was, again, my main objective. My other objective, while a bit in the background, was to encourage the facilitators of such programs to respond to these risks, and to help mitigate them for their members.

For example, within a few weeks of publishing the June column, I learned that one of these programs now requires executive review and approval prior to admitting a new member, a tactic that clearly manages one of the key risks identified. As new programs come online, I encourage the facilitators and members to remain conscious of these risks/issues, and to continue managing them as appropriate. Those that do will no doubt end up as the best “draws” for future members.

As a refresher, I recommended several key questions for companies considering peer-sponsored benchmarking initiatives. These questions are just as relevant today as they were in my original column. Some of the peer-sponsored programs manage them well, and some don’t. As I indicated in June, and barring any formal comparison of these initiatives (something we may elect to provide in the future), it’s up to the reader to decide what is right for them. These questions/issues are offered only as a guide to help inform the prospective member.

1. Do you know the GENUINE REASON the company is offering such a program? (Is it documented, written down, and accepted by executives of both the sponsor company and the participating company?)

2. What is the REAL COST of the program? Again, both to the sponsor company’s shareholder, and the member? What will the real cost of data collection be? Is it redundant with other programs? What is the sponsor company doing to mitigate this cost?

3. How does the program INTEGRATE/interact with other similar initiatives? Does it compete against them (creating more redundancy), or will it partner on data and other types of integration?

4. Does the program require both managerial and executive level APPROVAL and OVERSIGHT? Are competitive concerns and/or antitrust issues known and mitigated?

5. How will they PROTECT your data? What assurances do you have?

6. How ROBUST is the membership? Are there enough companies in their membership to provide meaningful information for your particular demographic or type of infrastructure?


While this may not be an exhaustive list, it is a start. I invite any of you to add to this list by commenting, and I will publish any additions/modifications in future columns.

To me, some of these issues are obviously more important than others. As I go through the above list, I believe the most important issue to be managed TODAY is that of redundancy and duplication of resources. And as more of these programs come to market, this is a cost that will become more and more visible. For example, it would be nice to see some significant effort to merge data requirements so that a member only needs to collect the information once. Some of this occurs today, but only in a very informal and ad hoc manner. More often than not, these programs end up “competing” with each other for very scarce resources. While each program may have something unique to offer, most require very similar means (data required) to arrive at their specific end point.

An analogy to consider: there is a reason why there is only one set of wires running down my street. Anything more would result in stranded investment and underutilization of assets. And in a world of scarce resources, that can’t be good for the buyer. Likewise with benchmarking initiatives. There is a lot of this type of redundancy and stranded investment in the world of benchmarking. Call it wishful thinking, but why not have some type of “data clearinghouse” that feeds data to each program based on what it needs, while eliminating the data collection duplication that is present today?

We must approach and address each of the other risks in a similar manner. Only in this way can we have programs that truly offer a win-win for both the member and the sponsor.

Again, if I offended any of the program facilitators or members by my words or tone in the June 16th column, I sincerely apologize. But I do, and will continue to, strive to offer observations and feedback that strengthen our collective ability to better manage our performance. And to this end, your comments and feedback are both welcome and appreciated. Please direct any of your comments to


Author: Bob Champagne is Managing Partner of onVector Consulting Group, a privately held international management consulting organization specializing in the design and deployment of Performance Management tools, systems, and solutions. Bob has over 25 years of Performance Management experience and has consulted with hundreds of companies across numerous industries and geographies. Bob can be contacted at

