Britain builds surveillance state infrastructure while violent crime falls to historic lows
Predictive policing, facial recognition, and the dismantling of oversight
Shaun Thompson spends his days trying to stop young men from killing each other. As an anti-knife crime activist in London, he mentors teenagers in postcodes where violence once seemed inevitable, talks them away from the crews that might get them killed, advocates for the youth services that keep getting cut. One afternoon, walking through central London, he was stopped by police. A facial recognition camera had flagged him as a suspect. For thirty minutes, Thompson—a man who has dedicated his life to preventing violence—stood proving to officers that he was not the person their algorithm thought he was.
He is now suing the Metropolitan Police. His case illuminates something strange about contemporary Britain: the infrastructure designed to catch killers caught an anti-violence activist instead. But the strangeness runs deeper. Thompson was detained in a city that has just recorded its safest year since records began.
The paradox of declining crime
London logged 97 homicides in 2025. That's 1.1 per 100,000 residents—lower than Paris, lower than New York, lower than Berlin. The rate is the lowest since comparable records began in 1997. Across England and Wales, violent incidents have plummeted 75 per cent from their 1995 peak. Knife crime fell seven per cent last year. Teenage homicides dropped by half. Walk the streets that tabloids describe as war zones and the statistics tell a different story: Britain has become dramatically, measurably safer.
Yet the state is building surveillance machinery at unprecedented scale. The Ministry of Justice is developing what internal documents originally named the Homicide Prediction Project—software to identify future murderers before they act. The Crime and Policing Bill 2025 would hand police access to 52 million driver licence photographs. The Home Office plans to deploy ten new facial recognition vans. The existing Offender Assessment System already churns through 1,300 risk assessments every day, its database swelling past seven million scores.
The gap between falling crime and rising surveillance infrastructure demands explanation. You don't build a murder prediction system because murders are declining. You build it because you want the capability—and once the capability exists, it finds new uses.
Inside the prediction machinery
Strip away the reassuring language and examine what these systems actually do. Documents obtained through freedom of information requests by the civil liberties group Statewatch reveal that the Homicide Prediction Project draws on records for between 100,000 and 500,000 individuals. Not just convicted offenders. Victims. Witnesses. People who reported crimes and found themselves absorbed into a database that might one day flag them as risks.
The data includes names, birthdates, ethnicities. It includes what officials call "health markers"—mental illness, addiction, self-harm. These markers, the documents note, are expected to carry "significant predictive power." A history of depression becomes a data point in calculating your probability of committing murder.
This isn't new. The Offender Assessment System has shaped criminal justice decisions for over two decades—influencing sentences, determining prison categories, affecting parole. Research commissioned by the Ministry of Justice itself found the system works less accurately for Black offenders than white ones. Not a glitch. A feature. Systems trained on data from decades of discriminatory policing learn to replicate that discrimination. Communities subjected to more police attention generate more arrests, which generates more data, which trains algorithms to focus more attention on those communities.
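That loop is simple enough to sketch. The toy model below is an illustration, not a reconstruction of OASys or any real force's deployment logic: two invented districts with identical underlying offending, a database seeded by heavier historical policing of one of them, and a rule that concentrates patrols wherever the records already point. Every figure in it is made up.

```python
# Toy model of the feedback loop described above: two invented districts
# with IDENTICAL underlying offending, a database seeded by historically
# heavier policing of district A, and a "hotspot" rule that sends most
# patrols wherever the records already point. All figures are illustrative.

ARRESTS_PER_PATROL = 0.6   # arrests one patrol generates per year; the same
                           # in both districts, because true offending is equal
HOTSPOT_SHARE = 0.7        # share of patrols sent to the "riskier" district
TOTAL_PATROLS = 100

# Historic records: district A was policed more heavily in the past.
recorded = {"district_A": 120, "district_B": 80}

for year in range(1, 11):
    # The "predictive" step: treat the district with more recorded
    # arrests as the hotspot and concentrate patrols there.
    hotspot = max(recorded, key=recorded.get)
    for district in recorded:
        share = HOTSPOT_SHARE if district == hotspot else 1 - HOTSPOT_SHARE
        patrols = TOTAL_PATROLS * share
        # More patrols mean more recorded arrests, though offending is equal.
        recorded[district] += patrols * ARRESTS_PER_PATROL
    print(year, {d: round(v) for d, v in recorded.items()})
```

Run it and district A never stops being the hotspot. Each deployment decision manufactures the data that justifies the next one, which is all the loop needs.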
The government insists these projects remain purely experimental, with no operational implications. But the internal documents reference "future operationalisation." The databases are being built. The data-sharing agreements are being signed. Capabilities, once created, rarely sit unused.
The lesson Britain refuses to learn
Forget the science fiction comparisons. The real precedent is closer and more instructive: the credit score.
When algorithmic credit scoring emerged in the mid-twentieth century, its promise sounded familiar. Remove human prejudice. Replace gut feeling with data. Treat everyone by the same objective criteria. The algorithm would be colour-blind in ways humans could never be.
Decades of research documented what actually happened. The algorithms learned from historical lending data—data shaped by redlining, housing discrimination, decades of unequal access to financial services. A landmark study by economists at Stanford and the University of Chicago found credit scores are five to ten per cent less accurate for minorities. Not because the formulas contain explicit bias, but because the underlying data is noisier for populations historically excluded from mainstream finance. The data itself encodes disadvantage.
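The mechanism is worth making concrete. The sketch below is my own illustration, not the study's method: two groups with identical creditworthiness and identical repayment behaviour, differing only in how noisy their recorded histories are, scored by a formula that never mentions group membership at all.

```python
# Illustrative only, not the Stanford/Chicago study's method. Two groups
# with identical true creditworthiness and identical repayment behaviour;
# the only difference is how noisy the recorded signal is for each group.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def score_accuracy(record_noise_sd):
    # True creditworthiness: drawn from the same distribution for any group.
    ability = rng.normal(0.0, 1.0, N)
    # Actual defaults depend only on true creditworthiness (plus luck).
    defaults = (ability + rng.normal(0.0, 0.5, N)) < -1.0
    # What the scorer sees: the truth blurred by record noise
    # (thin files, patchy histories, unreported payments).
    observed = ability + rng.normal(0.0, record_noise_sd, N)
    # A naive score with no group variable anywhere in it.
    predicted_default = observed < -1.0
    return (predicted_default == defaults).mean()

print("accuracy with clean records:", round(score_accuracy(0.3), 3))
print("accuracy with noisy records:", round(score_accuracy(1.5), 3))
```

Nothing in the formula discriminates. The accuracy gap comes entirely from the quality of the records, which is exactly the point.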
The pattern compounds. Denied credit, communities fall further behind economically. Falling behind generates data suggesting higher risk. Higher risk justifies denying credit. Researchers call it the ratchet effect—a feedback loop that tightens with each turn. The algorithm doesn't discriminate; it simply processes discriminatory inputs and outputs discriminatory results.
This is not speculation about what might happen with crime prediction. It is documented history of what does happen when you build algorithmic systems atop structural inequality. The Ministry of Justice knows its tools work less accurately for Black offenders. It proceeds anyway. When the National Physical Laboratory found facial recognition systems misidentified Black, Asian, and female faces at significantly higher rates, police forces lobbied to lower the accuracy threshold. The more accurate version, they complained, produced fewer suspects.
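The arithmetic behind that complaint is mechanical. The sketch below uses invented similarity-score distributions rather than the NPL's evaluation data, but it shows the trade: lowering the threshold surfaces a few more genuine matches and a far larger crowd of people flagged for resembling someone on a watchlist.

```python
# Illustrative numbers, not the NPL's evaluation data: how lowering a
# face-match threshold trades accuracy for volume of "suspects".
import numpy as np

rng = np.random.default_rng(1)

CROWD = 100_000      # faces scanned during a deployment
ON_WATCHLIST = 20    # people in that crowd who genuinely match a watchlist entry

# Hypothetical similarity-score distributions produced by the matcher.
watchlist_scores = rng.normal(0.75, 0.08, ON_WATCHLIST)
everyone_else    = rng.normal(0.40, 0.08, CROWD - ON_WATCHLIST)

for threshold in (0.70, 0.64, 0.60):
    true_alerts  = int((watchlist_scores >= threshold).sum())
    false_alerts = int((everyone_else >= threshold).sum())
    print(f"threshold {threshold:.2f}: {true_alerts:2d} genuine matches, "
          f"{false_alerts:4d} people flagged by mistake")
```

At the strict threshold the system finds most of its genuine targets. Drop it, and each additional hit is bought with dozens, then hundreds, of Shaun Thompsons.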
The vanishing watchdogs
As capability expands, oversight contracts. The Biometrics and Surveillance Camera Commissioner—the independent office charged with scrutinising police collection and use of biometric data—was effectively abolished under the Data Protection and Digital Information Bill. The position sat vacant from November 2023 to July 2025. The former commissioner, Professor Fraser Sampson, warned that Britain was "moving in the opposite direction" from other democracies. He then took a job as director of Facewatch, a private facial recognition company.
The European comparison is damning. The EU's AI Act, adopted in 2024, prohibits real-time facial recognition in public spaces except under tightly defined emergency conditions. The European Parliament voted overwhelmingly against predictive policing based on behavioural data. Britain has no dedicated facial recognition law whatsoever. Police deploy the technology under a patchwork of general data protection principles and non-binding guidance, each force deciding for itself when and where to point the cameras.
In 2020, the Court of Appeal ruled that South Wales Police's facial recognition deployment breached privacy and equality laws—the force's framework granted officers effectively unlimited discretion. The ruling changed nothing. The Metropolitan Police scanned nearly five million faces in 2024, double the year before. Civil liberties organisations called for a moratorium. The government announced expansion.
The infrastructure of control
The Crime and Policing Bill does more than enable surveillance. It bans face coverings at protests. It criminalises pyrotechnics at demonstrations. These provisions stack atop the Public Order Act's existing restrictions—the prohibitions on slow marches, on locking-on tactics, on the forms of disruption that make protest visible.
Combine protest restrictions with facial recognition and something shifts. The infrastructure serves purposes beyond catching criminals. It enables identification and tracking of anyone who dissents publicly. South Wales Police compiled a watchlist of over five hundred people for an arms fair in 2018. Climate activists—groups like Just Stop Oil and Extinction Rebellion, movements operating without deep ties to unions or housing campaigns or racial justice organisations—became test cases. The laws developed to address their traffic blockades now exist for broader application. A picket line is a slow march. A housing occupation is locking on.
Officials frame each measure as a response to specific disorder: anti-social behaviour here, protest disruption there, serious violence elsewhere. But the infrastructure outlasts its justifications. Database access, algorithmic risk assessment, inter-agency data sharing—these create permanent capacity. The question isn't what this government intends. It's what any government might do once these tools exist.
The question Thompson's case poses
When Shaun Thompson was detained, the facial recognition system worked exactly as designed. It identified a face matching its parameters and alerted officers. That the match was wrong, that the man it flagged dedicates his life to preventing the violence the system claims to address, that he was stopped in a city safer than it has been in living memory—none of this contradicted the technology's logic. The system calculates probability, not justice. It cannot distinguish a murderer from a man trying to prevent murder. It can only measure resemblance and act.
Thompson's lawsuit will test whether current deployments comply with human rights law. But his case raises harder questions. Does Britain want surveillance infrastructure scaled to a crime problem that no longer exists? Overseen by regulators whose independence has been dismantled? Deployed against communities whose presence in police databases reflects historical discrimination, not contemporary risk?
The government promises these tools will make Britain safer. The data shows Britain is already becoming safer without them. Building them anyway—expanding them, entrenching them, while abolishing the offices meant to constrain them—suggests the real purpose lies elsewhere. Thompson knows this. He works the streets where these systems will concentrate their attention, mentoring the young men who will fill their databases, trying to prevent the violence that justifies ever more surveillance. He was stopped not because the technology failed but because it functioned precisely as intended: treating proximity to risk as indistinguishable from risk itself.