Does Benchmarking really help you? [the answer might surprise you – unless you are already accounting for sample bias!]
Posted by firstname.lastname@example.org on October 31, 2017
How do you know if your performance is ‘where it should be’? And anyway, where should it be? Are you, perhaps, bumping along sub-par? And: what’s a valid benchmark?
Answering questions like these is not easy in isolation. That's why benchmarking is so popular. In fact, it's the one question practitioners ask us most often: Can you share Shared Services benchmark metrics?
Well, yes we can. Our SSON Analytics group has pulled together metrics from the top 20 most admired Shared Services Organizations around the world (you can access them here) so you can compare where you stand against the likes of Johnson & Johnson, Lufthansa, Vodafone, DHL, and Discovery Communications. If you want to know what these companies have achieved in terms of their big-picture metrics in Attrition, Payroll, Talent Management, Procure-to-Pay, Order-to-Cash, Record-to-Report, and more… this is where to go.
It's useful and it gives you something to reference when you need extra ammunition for in-house pitches.
But that's not necessarily where you should stop (or maybe even start).
The truth is that to get an accurate picture of where your operation stands, benchmarking against ‘top performers’ is not, ultimately, the most useful approach. In fact, it can be downright dangerous and misleading, according to a classic article in the Harvard Business Review – it highlights discouraging gaps without offering the context you need to judge their relevance to your operations.
“Looking at successful firms can be remarkably misleading….Here’s the problem with learning by good example: We fall into the classic statistical trap of selection bias…. relying on data samples that are not representative of the whole population. The theoretically correct way to discover what makes a business successful is to look at both thriving and floundering companies.”
Jerker Denrell, Harvard Business Review: Selection Bias and the Perils of Benchmarking
The problem with following so-called ‘leaders’ is that you fall into the statistical trap of selection bias: focusing on samples that are not representative of the entire population you are studying. A better way to benchmark, according to the article, is to study successful as well as unsuccessful examples. That's what allows you to correctly identify the qualities that actually differentiate them.
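Selection bias is easy to see in a quick simulation. The sketch below (illustrative only; the firms and the "risk-taking" trait are invented for the example) models a trait that raises the *variance* of outcomes but not the average. Benchmarking only against top performers makes the trait look like a winning formula, while the full population shows it has no effect on expected performance:

```python
import random

random.seed(42)

# Simulate 10,000 firms. "risky" raises the spread of outcomes but NOT the
# average: risky firms produce more big winners AND more big losers.
firms = []
for _ in range(10_000):
    risky = random.random() < 0.5
    spread = 3.0 if risky else 1.0
    performance = random.gauss(0, spread)
    firms.append((risky, performance))

# Benchmark only against 'top performers': the top 1% by performance.
top = sorted(firms, key=lambda f: f[1], reverse=True)[:100]
share_risky_top = sum(r for r, _ in top) / len(top)

# Look at the whole population instead.
share_risky_all = sum(r for r, _ in firms) / len(firms)

print(f"risk-takers among top 1%: {share_risky_top:.0%}")  # almost all of them
print(f"risk-takers overall:      {share_risky_all:.0%}")  # about half
```

The top 1% is dominated by risk-takers even though the trait buys you nothing on average – exactly the trap Denrell describes when you study only the winners.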
Focusing on top performers will only ever give you a part of the picture, confirms Emma Beaumont, MD, SSON Analytics. To get the whole picture, she explains, “practitioners will need to read the entire book – not just the most exciting chapter.”
The entire book, in this case, means SSON Analytics' newly launched Metric Intelligence Hub™: 16 core metrics tracked live across 22 different industries and 121 different countries – thousands of data points incorporating the good, the bad, and the downright ugly!
Why is this the right approach? Because it represents the next generation of benchmarking methodology, explains SSON Analytics Chief Data Scientist, Murphy Choy. “Oftentimes benchmarking takes the form of looking at averages in relative isolation, which translates into meaningless data. Let me give you an example: If we consider the world's strongest animals, all research points to the fact that it's the elephant or the whale. However – and this is key – if you consider strength relative to body weight, which is what counts, then the winner is the ant!”
Translating this into operational language: it only makes sense to evaluate metrics, or numbers, relative to a given input; and benchmark metrics need to relate to your specific environment in order to be meaningful, i.e., take into account the unique variables of a specific country or industry. In addition, averages are easily skewed by outliers, whereas the median gives you a more realistic idea of where the majority of your sample sits.
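The mean-versus-median point is worth seeing in numbers. A minimal sketch (the cost figures below are invented for illustration, not taken from MIH): eight operations cluster between $2 and $3 per invoice, and one outlier drags the average well above where the bulk of the sample actually sits:

```python
import statistics

# Hypothetical cost-per-invoice figures (USD) for nine SSOs; one outlier.
costs = [2.1, 2.3, 2.4, 2.5, 2.6, 2.7, 2.9, 3.0, 14.0]

mean = statistics.mean(costs)      # dragged up by the single outlier
median = statistics.median(costs)  # where the bulk of the sample sits

print(f"mean:   ${mean:.2f}")   # $3.83 - looks like everyone is expensive
print(f"median: ${median:.2f}") # $2.60 - the typical operation
```

Benchmarking against the mean here would suggest a typical operation has headroom it doesn't actually have; the median describes the majority far better.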
“With the Metric Intelligence Hub, we have focused on Efficiency Frontiers of specific industries and countries, which avoids the pitfalls associated with company-based efficiency models,” says Murphy. “So our benchmark metrics have been validated to take into account the efficiency frontiers of your industry and your location – including human labor efficiency, labor market efficiency, technological advancement efficiency, etc. And we have quadruple-validated the data, so we know it's absolutely reliable. Our metrics provide, perhaps for the first time in this form, insights about the relative efficiency of companies within the context of their industry/country combination. You just cannot get any more relevant than that.”
- MIH benchmarks are derived from a proprietary algorithm that was developed in-house by SSON’s data analytics team.
- MIH was specifically designed to address a major gap in the market – i.e., customers want to be able to see reliable metrics from a COUNTRY and INDUSTRY perspective.
- All metrics contained in MIH have been QUADRUPLE validated.
- 10 more metrics are being added this week.
- A further 20 metrics are being added before the end of Q4.
What's been missing in the Shared Services industry to date, Murphy says, is an interpretation of benchmarking data within the context of different operating environments, cost structures, market environments, and operational models – all of which differ enormously between companies.
“In truth, you may find that a $9 cost per pay slip in industry X and country Y is actually more 'efficient' than an $8 cost per pay slip in industry A and country B,” Murphy says. In other words, a company with unfavorable conditions and costs might be more efficient than a company operating under favorable conditions and costs. “But simple numbers in isolation won't tell you that,” he adds.
“It’s this that renders traditional metric benchmarking projects so unsatisfactory – and the Metric Intelligence Hub so exciting.”
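Murphy's payslip comparison can be sketched in a few lines. The baselines below are invented for illustration (they are not MIH figures): if the norm in country Y / industry X is $12 per pay slip and the norm in country B / industry A is $8, then dividing each operation's cost by its local baseline shows the $9 operation is the relatively efficient one:

```python
# Hypothetical baselines: the typical cost per pay slip in a given
# country/industry combination (illustrative numbers, not MIH data).
baseline = {
    ("country_Y", "industry_X"): 12.0,  # high-cost environment
    ("country_B", "industry_A"): 8.0,   # low-cost environment
}

def relative_efficiency(cost, country, industry):
    """Cost per pay slip divided by the environment's baseline;
    values below 1.0 mean better than the local norm."""
    return cost / baseline[(country, industry)]

# $9 in a high-cost environment vs $8 in a low-cost one:
r1 = relative_efficiency(9.0, "country_Y", "industry_X")
r2 = relative_efficiency(8.0, "country_B", "industry_A")

print(f"$9 payslip, country Y / industry X: {r1:.2f}")  # 0.75 - beats its norm
print(f"$8 payslip, country B / industry A: {r2:.2f}")  # 1.00 - merely average
```

The raw numbers say $8 beats $9; the normalized numbers say the opposite – which is the whole argument for benchmarking within a country/industry context rather than on absolute figures.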
Find out more: SSON’s Metric Intelligence Hub™
Additional reading: Selection Bias and the Perils of Benchmarking, HBR, Jerker Denrell
New MIH metrics being added next week:
1. Percentage of succession plans in place
2. Percentage of employees with a formal training and development plan
3. Involuntary attrition rate
4. Attrition in Year 1 hires
5. Days to close
6. Number of business days to resolve an invoice dispute case/ticket
7. Reporting cycle time – external reporting
8. Number of active General Ledger accounts
9. Payroll accuracy rate – electronic version
10. Payroll accuracy rate – manual version