Does Benchmarking really help? [the answer might surprise you – unless you are already accounting for sample bias!] | SSON Analytics

Benchmarking

How do you know if your performance is ‘where it should be’? And anyway, where should it be? Are you, perhaps, bumping along sub-par? And: what’s a valid benchmark?

Answering questions like these is not easy in isolation. That's why benchmarking is so popular. In fact, it's the one question practitioners ask us most often: Can you share Shared Services benchmark metrics?

Well, yes we can. Our SSON Analytics group has pulled together metrics from the top 20 most admired Shared Services Organizations around the world (you can access them here) so you can compare where you stand against the likes of Johnson & Johnson, Lufthansa, Vodafone, DHL, and Discovery Communications. If you want to know what these companies have achieved in terms of their big-picture metrics in Attrition, Payroll, Talent Management, Procure-to-Pay, Order-to-Cash, Record-to-Report, and more… this is where to go.

It's useful and it gives you something to reference when you need extra ammunition for in-house pitches.

But that's not necessarily where you should stop (or maybe even start).

The truth is that to get an accurate picture of where your operation stands, benchmarking against ‘top performers’ is not, ultimately, the most useful approach. In fact, it can be downright dangerous and misleading, according to a classic article in the Harvard Business Review: it highlights discouraging gaps without offering the context you need to judge whether those benchmarks even apply to your operations.

 

“Looking at successful firms can be remarkably misleading….Here’s the problem with learning by good example: We fall into the classic statistical trap of selection bias….  relying on data samples that are not representative of the whole population. The theoretically correct way to discover what makes a business successful is to look at both thriving and floundering companies.”  

Jerker Denrell, Harvard Business Review: Selection Bias and the Perils of Benchmarking

 

The problem with following so-called ‘leaders’ is that you fall into the statistical trap of selection bias: focusing on samples that are not representative of the entire population you are studying. A better way to benchmark, according to this article, is to study unsuccessful examples alongside successful ones. That's what allows you to correctly identify the qualities that actually differentiate them.
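Denrell's point is easy to demonstrate with a toy simulation. In the sketch below, the strategies, payoffs, and sample sizes are all invented for illustration: a ‘risky’ strategy has a *lower* average outcome than a ‘safe’ one, yet dominates the top of the league table, so benchmarking against the top 20 alone would tell you exactly the wrong story.

```python
import random

random.seed(42)

# Hypothetical population of firms: "risky" has a LOWER mean outcome
# but much higher variance than "safe".
firms = []
for _ in range(10_000):
    strategy = random.choice(["safe", "risky"])
    if strategy == "safe":
        outcome = random.gauss(100, 10)   # higher mean, low variance
    else:
        outcome = random.gauss(95, 40)    # lower mean, high variance
    firms.append((strategy, outcome))

# Selection bias: benchmark only against the top 20 performers.
top20 = sorted(firms, key=lambda f: f[1], reverse=True)[:20]
risky_in_top = sum(1 for s, _ in top20 if s == "risky") / len(top20)

# The full-population view tells the opposite story.
risky_mean = sum(o for s, o in firms if s == "risky") / sum(1 for s, _ in firms if s == "risky")
safe_mean = sum(o for s, o in firms if s == "safe") / sum(1 for s, _ in firms if s == "safe")

print(f"Share of 'risky' firms among the top 20: {risky_in_top:.0%}")
print(f"Average outcome – risky: {risky_mean:.1f}, safe: {safe_mean:.1f}")
```

The high-variance strategy crowds out the safe one at the extreme top of the distribution even though it performs worse on average, which is precisely why you need the floundering firms in the sample too.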

Focusing on top performers will only ever give you a part of the picture, confirms Emma Beaumont, MD, SSON Analytics. To get the whole picture, she explains, “practitioners will need to read the entire book – not just the most exciting chapter.”

The entire book, in this case, means SSON Analytics' newly launched Metric Intelligence Hub™: 16 core metrics tracked live across 22 different industries and 121 different countries – thousands of data points incorporating the good, the bad, and the downright ugly!

Why is this the right approach? Because it represents the next generation of benchmarking methodology, explains SSON Analytics Chief Data Scientist, Murphy Choy. “Oftentimes benchmarking takes the form of looking at averages in relative isolation, which translates into meaningless data. Let me give you an example: If we consider the world's strongest animals, all research points to the fact that it's the elephant or the whale. However – and this is key – if you consider strength relative to body weight, which is what counts, then the winner is the ant!”

Translating this into operational language: it only makes sense to evaluate metrics, or numbers, relative to a given input; and benchmark metrics need to relate to your specific environment in order to be meaningful, i.e., take into account the unique variables of a specific country or a specific industry. In addition, averages are easily skewed by outliers, whereas the median gives you a more realistic idea of where the majority of your sample sits.
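Both points (outlier-skewed averages, and metrics evaluated relative to an input) can be seen in a few lines of Python. All figures below are invented for illustration:

```python
import statistics

# Hypothetical cost-per-invoice figures (USD) for nine peer centers,
# plus one outlier mid-way through a migration.
costs = [3.1, 3.4, 3.2, 3.0, 3.3, 3.5, 3.2, 3.1, 3.4, 19.0]

print(f"mean:   {statistics.mean(costs):.2f}")    # pulled up by the single outlier
print(f"median: {statistics.median(costs):.2f}")  # where most of the sample actually sits

# The elephant-vs-ant point: the raw number favors the elephant, but
# the metric relative to an input (body weight) favors the ant.
# (Rough, illustrative figures only: kilograms lifted, body mass in kg.)
animals = {"elephant": (9000, 6000), "ant": (0.00015, 0.000003)}
for name, (lift, weight) in animals.items():
    print(f"{name}: lifts about {lift / weight:.0f}x its own body weight")
```

The mean lands well above nine of the ten data points, while the median stays with the pack – the same distortion a single unusual shared services center introduces into an industry ‘average’.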

“With the Metric Intelligence Hub, we have focused on Efficiency Frontiers of specific industries and countries, which avoids the pitfalls associated with company-based efficiency models,” says Murphy. “So our benchmark metrics have been validated to take into account the efficiency frontiers of your industry and your location – including human labor efficiency, labor market efficiency, technological advancement efficiency, etc. And we have quadruple-validated the data, so we know it's absolutely reliable. Our metrics provide, perhaps for the first time in this form, insights about the relative efficiency of companies within the context of their industry/country combination. You just cannot get any more relevant than that.”

 

• MIH benchmarks are derived from a proprietary algorithm that was developed in-house by SSON’s data analytics team.
• MIH was specifically designed to address the major gap in the market – i.e. that customers want to be able to see reliable metrics from a COUNTRY and INDUSTRY perspective.
• All metrics contained in MIH have been QUADRUPLE validated.
• 10 more metrics being added this week.
• A further 20 metrics being added before end of Q4.

 

What's been missing in the Shared Services industry to date, Murphy says, is an interpretation of benchmarking data within the context of different operating environments, cost structures, market environments, and operational models. These all differ enormously between companies.

“In truth, you may find that a $9 cost per pay slip in industry X and country Y is actually more 'efficient' than an $8 cost per pay slip in industry A and country B,” Murphy says. In other words, a company with unfavorable conditions and costs might be more efficient than a company operating under favorable conditions and costs. “But simple numbers in isolation won't tell you that,” he adds.
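The arithmetic behind this claim is straightforward once a context baseline exists. The sketch below is a deliberate simplification: the baseline table, industry/country pairs, and the plain ratio are hypothetical placeholders, not SSON's actual (proprietary) methodology.

```python
# Hypothetical expected cost per pay slip for each industry/country
# combination. These numbers are invented for illustration.
baselines = {
    ("manufacturing", "Germany"): 10.0,
    ("retail", "India"): 4.0,
}

def relative_efficiency(cost: float, industry: str, country: str) -> float:
    """Cost per pay slip divided by its industry/country baseline.

    A value below 1.0 means cheaper than contextual peers;
    above 1.0 means more expensive than contextual peers.
    """
    return cost / baselines[(industry, country)]

# A $9 pay slip in a high-cost context can beat an $8 one in a low-cost context:
print(relative_efficiency(9.0, "manufacturing", "Germany"))  # 0.9 – ahead of peers
print(relative_efficiency(8.0, "retail", "India"))           # 2.0 – behind peers
```

The absolute numbers say $8 beats $9; the context-relative numbers say the opposite, which is the whole argument for benchmarking within an industry/country frame rather than against a global average.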

“It’s this that renders traditional metric benchmarking projects so unsatisfactory – and the Metric Intelligence Hub so exciting.”

 

* * * * * *

 

Find out more: SSON’s Metric Intelligence Hub™

Additional reading: Selection Bias and the Perils of Benchmarking, HBR, Jerker Denrell  

 

 

New MIH metrics being added next week:

1   Percentage of succession plans in place

2   Percentage of employees with formal training and development plan

3   Involuntary Attrition Rate

4   Attrition in Year 1 hires

5   Days to Close

6   Number of business days to resolve an invoice dispute case/ticket

7   Reporting cycle time (external reporting)

8   Number of Active General Ledger Accounts

9   Payroll Accuracy Rate (electronic version)

10 Payroll Accuracy Rate (manual version)

 

 


