I’ve spent a lot of time talking about measurement and metrics with clients and colleagues, and sometimes people will comment “You’re quite negative about measurement for someone who specialises in it”, and I think that observation isn’t entirely wrong. The whole point of developing an understanding of a subject is that one gains a greater and greater belief in its importance, but likewise a greater and greater intolerance for it being done ‘badly’.
There are three fairly simple errors I think keep happening which I’d like to share with you. In each case, I’ll try to explain why I think it happens and why it matters:
- Measuring too many things
- Measuring what’s available
- Gaming the measurement
There are, of course, far more than that, including some more specific mathematical errors like using Pearson correlation coefficients incorrectly, or weird quirks of statistics like Simpson’s Paradox, but I think these are three which can be reasonably explored and understood in one blog post without needing any graphs or equations…
Measuring too many things
“X is important, so let’s do a lot of it” is an error not confined to measurement, but it’s one that seems extremely common in measurement. We are in a world where we’ve got very excited about the possibilities of data-driven decision making, which makes it awfully tempting to measure everything. The problem is, of course, that once you have started measuring things, you’re going to start paying attention to them - whether or not you really should, and at the expense of other things that are probably more important. We all have limited attention spans, and if you only have three measures, it’s hard to ignore one, but if you have thirty, it’s easy to pay attention to the ones going up, and gloss over those going down.
In the world of Employer Branding, the obvious examples would be things like Employer Rankings - which might be something you should care about, but that depends on what your business actually wants to achieve. If one considers something like the Times Top 100, for example, you’re largely measuring how strongly students who didn’t apply to you think other people should, because that’s pretty much who they ask, and what they ask them. If your objective is to have a super-targeted recruitment process, you want that measure to go down, not up, as it’s going to be a measure of inefficiency. If your objective is to be popular, however, great - let’s focus on it.
As a real-world example of someone being ruthless about reducing measurement, I was fascinated recently to read an interview with James Timpson (of Timpson, who do shoes and keys) in The Times, where he explained they’ve simplified everything down to just three measures - how much each store takes, how happy customer feedback is, and how much is in the bank. In the process, they threw out £8m of electronic point-of-sale kit that was measuring things nobody needed to know. I was even more fascinated to read the Twitter storm amongst data analysts and BI specialists, who were split between those like me filing it as a case study, and those decrying it as medieval…
Measuring what’s available
Quite often, the things we really would like to measure prove to be expensive - or even impossible - to put a hard number on, whereas things that are pretty clearly less important or even virtually irrelevant may be much easier to enumerate. The temptation to grab onto a measure simply because it’s going to be easy to collect, display and indeed extrapolate can be insurmountable, but it’s not only not a good idea, it can lead to real disaster.
For employers, for example, the key hiring metric should obviously be quality. But quality measured how? Quality at interview? Quality in the first few months in the job? Quality over a career with you? Each becomes increasingly important but increasingly impossible to measure, and even if you could - increasingly too late… But don’t things like cost of hire or time to hire look easier? And even better, Facebook followers, or clicks on programmatic campaigns. Well, yes. But none of that is really what your organisation primarily wants.
Another classic is the number of times someone in internal comms has asked me “What’s a good read rate for our newsletter/intranet content?” To which I will always answer “Depends what you want it to achieve, and if the content’s any good. If it’s crap and serves no purpose, a good read rate is zero” (or words to that effect…) What you really want to measure is the impact of that internal comms, which might be something like productivity.
The danger here is that we start playing to the measures we can measure, not the measures that matter. And the consequences can be utterly catastrophic. This error is often called the McNamara Fallacy, after the US Secretary of Defence during the Vietnam War, who believed fiercely in measurement. He fixated on how many enemies were killed, because that was countable, but refused to consider how Vietnamese people felt about the war, because he couldn’t measure it “so it must not be important”. And this led him to insist the US were winning, because the numbers said so, while this was quite obviously not true.
Gaming the measurement
Even if we’ve done the measurement right, there’s a real risk that people interpret the data to suit their own agendas - and this can happen consciously or subconsciously; we’d tend to call it “confirmation bias”. It can be as simple as selectively paying attention to some of the data, but it can also mean applying a quite unwarranted level of analysis to sparse data that happens to go your way. It’s very easy, and very human, to do this, but it can also cause you big problems later on.
Having at various points had the dubious honour of judging awards in our field, I’ve seen this happen a lot when people write award entries, and for the most wholesome and understandable of reasons. Someone’s completed a huge piece of work which they, colleagues and partners have worked incredibly hard on and are very proud of, and then some horrible individual insists that they prove what happened. I’ve never seen an entry with the courage to say “Our objective wasn’t quantitative. Naff off”, so instead I am forced to pretend that the metrics declared in the ‘outcomes’ section have anything to do with the objectives declared earlier on the same form.
There are ethical reasons not to do this, but also really pragmatic ones. If you present a relentlessly ‘optimistic’ view of what’s happening, one day things will catch up with you, when that optimism isn’t translating into real business measures. And sometimes we genuinely need to know when things aren’t working, either to change course or to argue for more investment.
As always, my advice starts with simplifying things. Look at what you’re measuring now. Ask yourself “Which of these would the CEO most like me to improve? How about the HRD?” Ask yourself which you can dispose of altogether. And ask yourself how you can do it in a way that doesn’t let you avoid or explain away what’s really happening.
I leave the final words to Anna Rampton, Director of Better at the BBC (according to W1A):
“The fact is, this is about identifying what we do well, and finding more ways of doing less of it better”