…gets done. So the saying goes, and so it is in real-life organisations. But is that necessarily a good thing? How would we know? Whilst you read this post, I would encourage you to think about the measures and evaluations you and your team or department are subjected to at work. Do these measures make sense? Do they have unintended consequences? Do they provide the right incentives to achieve the right kind of results in terms of quality and customer expectations?
Pausing on these questions, you will likely realise, or simply be reminded, that many of the measures and evaluations you are subjected to either make no sense, aren’t really measurable, encourage the wrong kind of behaviour or simply miss the point of the product or service your team or business provides. Here are a couple of examples we came across recently.
A department with lots of project managers is measured on completing the process steps dictated by an enterprise-wide project management system. All steps have to be completed irrespective of project size, creating massive overheads for small projects. A project in this framework requires an initiation report, a milestone report at every checkpoint and a completion report, in addition to the reports that go out to stakeholders. None of the internal reports gets read by anybody, but they are required to release funds for the next stage. Small wonder the project managers are demotivated.
In another organisation, all the measures were based on the number of approvals granted in a particular time period (day, week etc.). This purely quantitative measure meant that data entry mistakes were not corrected and all verification steps were skipped to hit volume targets. When a subsequent audit looked at the quality of the data and at approvals potentially granted in error, nearly one third of the approvals turned out to be either incomplete or unjustified based on the information actually recorded. A separate team had to be established to go through all the historical records and recheck each and every entry.
As crazy as these two examples may sound, everyone we speak to has a similar story to tell. These examples of poor measures are NOT the exception, they are the norm – but why? Why do organisations create measures which so clearly either measure the wrong thing or have obvious adverse side effects on behaviour? Who is to blame? And why do we have to measure everything and everyone, anyway?
The answer may surprise you. We created the excessive measurement culture for a few reasons – to manage complexity, to create a meritocracy and to create accountability. Vertical integration, geographic spread and the large capital demands of the railway and oil companies of the late 19th century meant that we had to find ways to remotely control complex organisations. This required three things – standardisation, measurement and communication. With information confined to paper travelling at railway speed, the volume of information transmitted had to be relatively small and easy to understand by ‘outsiders’ – managers without the requisite detailed technical knowledge. The combination of these constraints encourages the creation of simple measures – especially quantitative measures that are simple to capture and simple to understand. Like the number of railway sleepers laid that day. That number abstracts away all the complexity of the job – the terrain, the availability of labour and parts, the weather etc. But it is very easy to capture (just count) and very easy to transmit (just one number).
Our approach to measurement has not changed much since those days; it is only just beginning to change with the idea of ‘big data’ and the advent of the ‘internet of things’. Mostly, though, we are still using the same approach – we measure what’s easy to capture and easy to explain. Then we construct a rationalisation for why this is the right or best measure and how it really captures what we were after in the first place. Here is a common example. Good managers should regularly give feedback to their direct reports, provide clear goals, tasks and priorities, and develop their skills through coaching. This much we know from the academic research on manager effectiveness. But how do we measure that? Where do you even start? How would you capture the timeliness and quality of the feedback provided to employees? Or the quality of the experience (setup, tone of voice, language, empathy etc.)? Anyone tasked with creating those measures will throw their hands up after a while and do what everyone else is doing – measure the frequency of 1-on-1 meetings between managers and their employees and create a quarterly or six-monthly performance evaluation process where a form is filled in and a box can be ticked. Job done.
Centralised bureaucracies will always favour simple, quantitative measures that are easy to report on. The notion of quality will either be ignored or quality will be redefined to suit the quantitative measure. There is a certain logic to this – if the complexity of creating the measure or capturing the information exceeds either the time required to do the actual work or the communication bandwidth available, the measure will have to be simplified. The manager effectiveness scenario fits the former condition; the railway example fits the latter. Bandwidth is not much of an issue nowadays, but capturing relevant information certainly is (although methods have recently been developed that can capture the quality of the communication between two people without analysing the words being said, using tonal analysis alone).
We are not too far from the day when everything we do and say can and will be captured and analysed in real time. In fact, Amazon is already doing this. That in itself does not mean that organisations will be using ‘better’ measures. Creating good measures with the fewest possible unintended consequences is hard. Selling those measures to an audience that expects familiarity and simplicity (distant management, finance, stock market analysts) is even harder. So I would venture that poor measures are here to stay until we lose the need for excessive vertical integration and are able to hide more of the complexity in the tools we use (this process is underway and will also be driven by the Internet of Things).