Why Shared Services Organisations should benchmark with the entire market (and not JUST the most successful companies)

It’s possibly the best-kept secret in the world of benchmarking (or, more likely, most of us simply sit outside the mathematical circles where the theory is well recognised). Either way, I confess to having spent most of my career completely oblivious to the problem of data selection bias in benchmarking.

It wasn’t until recent years, when I started spending time with data scientists and statisticians (releasing my inner geek), that the penny dropped. It’s actually a pretty simple concept: if we only ever compare ourselves to top performers, we run the very real danger of ignoring large (and very relevant) datasets from the wider demographic that, together with the top performers’ data, offer us a more complete view of the universe.

We all know the traditional premise of why corporate benchmarking is so popular (especially in an industry like Shared Services & Outsourcing that was built on performance measurement and lives and dies by its metrics and SLAs). It’s also basic human nature to aspire to be like those who perform better than we do. And so we religiously study the behaviours and results of the most successful companies in an attempt to join the ranks of “Best in Class”, “World Class” or “Top Quadrant” (or whatever marketing label consultancies and analyst firms are currently attaching to the elite), in the hope that some of their genius rubs off.

But actually, the statistical facts tell quite a different story. One that shows that looking for magic formulas solely amongst the top performers rather misses the point, because the top performers will only ever be able to tell us part of the picture. If we want the full story, we have to read the whole book (and not just its most exciting chapter).

This isn’t news. Analytics experts have been saying this since God was a boy, but somehow the message hasn’t filtered through to the mainstream, and by and large most of us still think benchmarking against the top of the quadrant is the only way to go. Don’t get me wrong – it’s definitely ONE way to benchmark, and it certainly creates valuable insight into the high bars of excellence being achieved by reputable brands that we could all aim for. The Data Science team here at SSON Analytics fully supports the viability of this route (see the crowd-sourced best practice metrics from our Top 20 most admired SSOs). But the bigger question is whether a truer reflection of the market comes from ALSO considering the wider set of benchmark data that doesn’t make it into the selected peer group or elite squad.

If you’re coming at this concept for the first time, I’d strongly urge you to read some much better rhetoric than my own (admittedly highly unqualified) take on this to help you form your view. My favourite lay explanation of the theory is Jerker Denrell’s HBR article Selection Bias and the Perils of Benchmarking. It’s an oldie but a goodie.

Denrell presents the issue far more eloquently than I have:

 “Looking at successful firms can be remarkably misleading….Here’s the problem with learning by good example: We fall into the classic statistical trap of selection bias….  relying on data samples that are not representative of the whole population. The theoretically correct way to discover what makes a business successful is to look at both thriving and floundering companies”  
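Denrell’s point is easy to demonstrate with a toy simulation (every name and number here is hypothetical, purely for illustration). Suppose firms choose between a “bold” strategy and a “safe” one, and the bold strategy has the same average outcome but much higher variance. Benchmarking only the top performers makes boldness look like the magic formula, even though across the whole population it confers no advantage at all:

```python
import random

random.seed(42)

# Hypothetical model: 10,000 firms, half "bold" (high-variance outcomes),
# half "safe" (low-variance outcomes). Crucially, both strategies have the
# SAME average outcome, so neither is actually better on average.
firms = []
for _ in range(10_000):
    bold = random.random() < 0.5
    outcome = random.gauss(0, 3 if bold else 1)  # same mean, different spread
    firms.append((bold, outcome))

# "Best in class" peer group: the top 5% by outcome.
firms.sort(key=lambda f: f[1], reverse=True)
top = firms[:500]

share_bold_top = sum(b for b, _ in top) / len(top)
share_bold_all = sum(b for b, _ in firms) / len(firms)

print(f"bold firms among top performers:   {share_bold_top:.0%}")
print(f"bold firms in whole population:    {share_bold_all:.0%}")
```

Because high-variance strategies dominate the extreme tail, almost every firm in the top 5% is “bold”, while the whole population is split roughly 50/50. Study only the elite squad and you’d conclude boldness drives success; study the whole book (including the floundering firms, who are also disproportionately bold) and the effect vanishes.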

The question now is how to absorb this into our day-to-day benchmarking exercises. SSON Analytics put its mind to examining whole-population datasets that equip the user with a more comprehensive perspective of the market. Our Chief Data Scientist feels pretty strongly about this, and as such masterminded the Metric Intelligence Hub™, which includes a series of Finance and HR metric benchmarks across whole datasets spanning 121 countries and 22 vertical industries. This frontier efficiency approach examines data from all companies in the selected population (both thriving and surviving), eliminating data selection bias. It’s a pretty neat idea, and you don’t need to add your own data to read the country-specific and vertical-industry-specific results.

We could debate the merits of different benchmarking approaches until the cows come home, and I suspect there will never be a definitive answer (given the highly contentious nature of the topic). There will doubtless be plenty of people with plenty to say about our suggestion that top-performer comparisons alone will always give a limited view of the universe.

I’d (politely) suggest that those who dispute this theory may have a pretty good commercial reason to disagree that this wider approach to benchmarking has real legs. And I expect they’ll (equally politely) suggest back that we have a commercial MO too. Repeat forever…
