

The Vexed Question of Indicators

As my workload around Outcome Based Accountability grows, the importance of selecting indicators (i.e. measures of well-being) which say something of central importance to the desired outcome becomes increasingly apparent. Some indicators are relatively straightforward and are powerful proxies for the desired outcome. For example, if our desired condition of well-being (outcome) is “All people in our community are healthy”, then we know that people carrying excessive body weight are particularly likely to be unhealthy. The rate of obesity in a community is therefore a powerful proxy indicator for good health. If we successfully manage interventions to address obesity, we can be relatively certain that more people will enjoy a better quality of life and live longer.

Choosing appropriate indicators which communicate well, which have the proxy power described above, and for which we have timely and accurate data is essential to quantify the extent of a problem and to form the basis for measuring progress in addressing it. But a recent experience working with a local authority looking to address poor mental health in school children has confirmed my understanding of the divisive nature of inappropriate indicators. Data can not only cunningly disguise the nature of a problem, but lead us off on a wholly inappropriate course of action.

If you wanted to measure the well-being of school children in a community, you might well consider that data on the number of “Statements of Special Educational Needs” carried out would help quantify a problem. These statements follow an assessment carried out by the local education authority describing the nature of a child’s special needs and what special help they should receive outside of mainstream school provision. On the surface, you would expect this to tell you the percentage of children in a given community who have special needs. As a whole-population indicator it could be used, by government for example, in prioritising the allocation of resources for special educational needs. Schools could be performance managed on how well they handle the assessment process in terms of timeliness, accuracy and appropriateness.

But would this tell us something of central importance to the well-being of children in that community? Probably not, actually. Somewhat perversely, the indicator could be telling us how poorly a community was providing good educational services for its children. Why is this so? Because if parents feel their children are not receiving the standard of education they deserve, then a good way of forcing the authority’s hand is to apply for a Statement (which is a statutory document). If schools provided decent services in the first instance, and had the confidence of parents, then the need for assessments would disappear for all but the neediest children. This would be particularly insidious if schools were being penalised for having low numbers of assessments and rewarded for high numbers.

This exposes the vital importance of understanding the “story behind the baseline”; in other words, the drivers or forces at work. We can only develop an effective strategy to address challenging outcomes if we understand the story, and we’ll only do that if we engage effectively with relevant partners including, in particular, parents, service users and other stakeholders.

The same is true of performance measures. A classic example is the call centre that measured the quality of its service by the number of times the telephone would ring before it was answered. On the face of it, a good performance measure. Unless, of course, no one answered the telephone at all. Managers quickly realised that they would need to introduce a second performance measure, “number of calls unanswered”, to avoid creating a perverse incentive (i.e. if you can’t answer the phone quickly, don’t answer it at all!).
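The pairing of those two measures can be sketched in a few lines of code (Python here purely for illustration; the call data and names are invented, not drawn from any real call centre):

```python
# Hypothetical call log: rings before each call was answered;
# None means the call was never answered at all.
calls = [2, 3, 1, None, None, 2, None, 1]

# Measure 1 alone looks excellent, because it silently
# ignores every call that was never picked up.
answered = [rings for rings in calls if rings is not None]
avg_rings = sum(answered) / len(answered)

# Measure 2 exposes the perverse incentive: the share of
# calls that were simply left to ring.
unanswered_rate = calls.count(None) / len(calls)

print(f"average rings before answer: {avg_rings:.1f}")   # looks great
print(f"calls unanswered: {unanswered_rate:.0%}")        # tells the real story
```

The point is not the arithmetic but the pairing: the first measure only becomes honest once the second is reported alongside it.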