The idea of ranking hospitals, schools, housing associations, and other public-facing services in a performance table seems simple. Obvious, even.

We could create a league table of the best and worst performers, a concept that’s easy to grasp for anyone with even a passing interest in sport.

It’s a tried and tested formula.

After all, it’s a matter of objective fact that Liverpool were the best-performing football team in the English Premier League in 2024-25, whilst Southampton were the worst.

Surely the same logic can be applied to public services?

Sam Spencer wrote an interesting post recently in which he asked:

“Why do senior politicians (from all parties) still think that public sector targets and league tables are a good idea?”

He cited an interview with the UK Secretary of State for Health and Social Care, Wes Streeting (listen here, from 2:18 to 2:23), in which it’s put to Streeting that:

“There’s a risk that trusts will only focus on measures that boost their ranking… There’s a real danger of gaming the system which we often see with public sector targets…”

Streeting dodges this important point, instead asserting that ‘sunshine is the best disinfectant’ and framing transparency as the route to better outcomes. He also dismisses critics of league tables as ‘elitist’.

So is he right in suggesting that league tables and transparency are the best way of improving performance? And what is the origin story of league tables?

The Problem of the Single Number

The use of league tables and performance metrics in public services is a key part of the “New Public Management” (NPM) movement, which gained traction in Western countries from the late 20th century onwards and which sought to apply private-sector discipline to the public sphere.

The UK’s journey with public service rankings began in the education sector. Since 1992, annual school league tables have been published in England, ranking institutions based on exam results. The stated purpose was to create a quasi-market in education, giving parents the information they needed to choose a school. However, the history of this policy is less a story of clear-cut success and more a chronicle of constant struggle, a series of what have been called ‘metric wars.’  

Policymakers have repeatedly grappled with how to measure a school’s performance fairly, constantly changing the headline metric whilst attempting to account for contextual factors such as a school’s student intake.

The methodology has changed at least five times since its introduction, and this constant churn reveals a fundamental flaw:

Reducing a complex institution like a school to a single, easily digestible number is a task fraught with statistical and ethical challenges.

And herein lies the problem:

When a metric is complex enough to be fair, it becomes too difficult for the public to understand. When it is simple enough to be understood, it is often too crude to be accurate.  
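To make that concrete, here’s a toy sketch in Python. The schools, scores, and weightings are entirely invented for illustration; nothing here comes from any real ranking. It shows how collapsing three performance dimensions into one composite score produces a league table whose order depends as much on the choice of weights as on the institutions themselves:

```python
# Toy illustration (invented data): three hypothetical schools scored
# on three dimensions, collapsed into a single league-table position.
# Two equally defensible weightings produce two different "winners".

schools = {
    "School A": {"exam_results": 0.82, "progress": 0.55, "wellbeing": 0.70},
    "School B": {"exam_results": 0.70, "progress": 0.80, "wellbeing": 0.65},
    "School C": {"exam_results": 0.60, "progress": 0.75, "wellbeing": 0.90},
}

def league_table(weights):
    """Rank schools by a weighted composite of their dimension scores."""
    scores = {
        name: sum(weights[dim] * value for dim, value in dims.items())
        for name, dims in schools.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# A 'results-first' weighting: School A tops the table.
print(league_table({"exam_results": 0.6, "progress": 0.3, "wellbeing": 0.1}))

# A 'progress-first' weighting: School A drops to last place.
print(league_table({"exam_results": 0.3, "progress": 0.5, "wellbeing": 0.2}))
```

Neither weighting is wrong; both are defensible. And that is exactly the problem: the final rank order reads like an objective fact, but it is an artefact of choices the reader never sees.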

This same dynamic has played out in the healthcare sector. In 2000, the UK government introduced a star ratings system for hospitals and ambulance services, which was eventually scrapped in 2005 due to concerns about its effectiveness and its crudeness as a tool for patients. And yet, in a repetition of history, a new system of NHS league tables was launched in 2025 with almost identical stated goals. The imperative to demonstrate accountability through simple, public rankings appears to outweigh the empirical evidence of their problematic outcomes.  

When the Race to the Top Becomes a Game

The main criticism of these ranking systems is that they create powerful incentives for organisations to ‘game the system’. When the pressure to perform is intense, people alter their behaviour to improve their ranking without actually improving underlying outcomes. In the 1990s, for example, some hospitals employed “hello nurses” whose only job was to greet patients within the first five minutes of their arrival, satisfying a five-minute emergency waiting time target. The metric was met, but the underlying problem of long waits went unaddressed.

This focus on what is measured often leads to ‘measure fixation’, where organisations concentrate so heavily on a narrow set of metrics that they neglect other important, but unmeasured, aspects of care and service.

The origins of public service league tables are a direct result of the belief that governments should be run like businesses. But public services are not conventional businesses. Their purpose—to educate children, house the homeless, and protect the public—is complex, human, and often defies simple quantification.

Analysis of the evidence base reveals that the core tension of league tables lies in the paradoxical relationship between their intended purpose and their actual impact. While they can drive targeted, quantitative gains in specific areas like waiting times, these improvements are often subverted by a cascade of systemic issues.

Most of our public sector services need a complete overhaul. Obsessing over optimisation will only get us so far, and it takes eyeballs away from system-level problems.

Optimising is about building on existing practices and improving efficiency, while innovating is a more discontinuous approach that breaks with established practices and mindsets. While both contribute to overall performance, research has found that optimisation often shows a stronger relationship to perceived performance. This can lead public sector leaders to concentrate on incremental improvements to existing processes rather than pursuing genuinely new, but unproven, ideas.  

This is our classic paradox: the metrics and targets intended to drive improvement can create a powerful set of incentives that work against the very innovation and risk-taking we need.

The metric machine gets all the attention and consumes a huge amount of bandwidth – as organisations aim for an upper quartile position.

The innovation machine is too often left flirting with relegation.


Photo by Dexter Fernandes on Unsplash
