

Performance Reporting Measures vs Performance Management Measures – Part 2

05/12/2019

Another Problem with % Measures

You may have read my blog from last week comparing Performance Reporting Measures vs Performance Management Measures.

Performance reporting is littered with measures that may appear to carry meaning for some people but, in our observations, have been misleading and impenetrable to many. They certainly don't help understanding, nor show how to improve!

Here are some examples of reporting measures that we introduced previously:

  1. % items completed: % implies a ratio – with a numerator and denominator. E.g. % Repairs Completed defined by (Number of Repairs Completed / Total Number of Repair Calls) * 100
  2. % completed within some timeframe: E.g. From a previous blog’s A&E figures, we saw % of A&E attendances seen in 4 hours or under.
  3. Complicated Measure Combinations: E.g. % Forecast Accuracy in Supply-chain
  4. Applying sophisticated statistical treatment to raw performance measures that only stats specialists can read: E.g. Exponentially weighted moving averages
  5. Statistical representation of a population of people or things: E.g. Electric Car Use by Country
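To see why a bare % can mislead, here is a minimal sketch of the first measure in the list, % Repairs Completed. The team names and figures are hypothetical, invented purely to illustrate how the ratio hides the raw volumes behind it.

```python
def pct_completed(completed: int, total_calls: int) -> float:
    """% Repairs Completed = (Number of Repairs Completed / Total Number of Repair Calls) * 100."""
    return 100.0 * completed / total_calls

# Two (hypothetical) teams report an identical 90% completion rate...
team_a = {"completed": 9, "total_calls": 10}
team_b = {"completed": 900, "total_calls": 1000}

print(pct_completed(**team_a))  # 90.0
print(pct_completed(**team_b))  # 90.0

# ...yet the raw numbers describe very different situations: Team A left
# 1 call open, Team B left 100. The % alone cannot tell you which is which.
```

The same 90% summarises two workloads that differ by two orders of magnitude, which is exactly the kind of information a reporting measure throws away.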

There’s one more critical problem with % measures I didn’t mention last time. And this one is particularly mind-bending, even to some of those who have studied Maths!

You start to stumble across the problem when you start drilling down into sub-sets of the data to “better understand what is going on”. So, for example, regarding A&E data, you may want to drill down by hospital and by age-group. You do this at your peril!

But, to keep this light, we’ll select an alternative example from Wikipedia that you can all go take a look at – batting percentages over two years for two baseball players. We could have picked a cricketing example, but who knows what’s happening in the Test in New Zealand right now – well I said I wanted to keep this light!

So here are the baseball figures (from the Wikipedia article) – each figure is (number of hits) / (number of “at bats”):

Player          1995            1996            Combined
Derek Jeter     12/48 (.250)    183/582 (.314)  195/630 (.310)
David Justice   104/411 (.253)  45/140 (.321)   149/551 (.270)

So looking at the individual year’s batting % in each of 1995 and 1996, you’d want to bet on David Justice. BUT! When you look at their combined % for the 2 years, you’d want to go with Derek Jeter. Confused?

I won’t explain this paradox here, since Wiki does a very good job of it – but it is well-known (to some mathematicians and stats guys) as Simpson’s Paradox. It happens because both the numerator and denominator can vary independently.
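A short sketch makes the paradox concrete, using the batting figures from the Wikipedia article as (hits, at-bats) pairs:

```python
# Hits and at-bats per player per year, from the Wikipedia
# Simpson's Paradox example.
stats = {
    "Derek Jeter":   {1995: (12, 48),   1996: (183, 582)},
    "David Justice": {1995: (104, 411), 1996: (45, 140)},
}

def average(hits: int, at_bats: int) -> float:
    return hits / at_bats

# Justice beats Jeter in EACH individual year...
for year in (1995, 1996):
    assert average(*stats["David Justice"][year]) > average(*stats["Derek Jeter"][year])

# ...but Jeter wins when the two years are pooled, because the
# denominators (at-bats) are distributed very differently across years.
combined = {
    player: average(sum(h for h, _ in years.values()),
                    sum(b for _, b in years.values()))
    for player, years in stats.items()
}
assert combined["Derek Jeter"] > combined["David Justice"]
```

Note that the reversal comes entirely from how the at-bats (the denominators) are split across the two years – no individual figure is wrong.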

The ONLY way to resolve this is to have a clear PURPOSE for the business process (Wiki refers to STORY), which will guide the reader on whether to use the aggregated % or the component %s, OR to use an alternative measure altogether.

And I’m pretty sure Dilbert would encourage you to look at the underlying raw data – i.e. number of “at bats” and the “hits” separately (instead of, or, worst case, as well as %’s) if you really want to understand what’s happening!

David Anker

Written By: Cranfield University
