When the University of Worcester hosts its Evidence-Based Policing Conference on June 26, 2026, the keynote won’t come from an academic. It’ll come from Alex Murray OBE, director of threats at the UK’s National Crime Agency. His topic: how artificial intelligence has moved from being a futuristic concept to a daily operational tool in law enforcement.
That trajectory — from experimental to operational — mirrors what’s happening across the criminal justice landscape, from predictive policing algorithms to AI-powered community supervision tools. And the electronic monitoring industry sits squarely in the path of this transformation.
Table of Contents
- The UK’s AI Arms Race: 8 Million Deepfakes and 1,200 Fraud Reports a Day
- UK Facial Recognition: From Court-Ordered Shutdown to Nationwide App Rollout
- Where Is AI Already Embedded in Criminal Justice?
- What Does This Mean for Electronic Monitoring?
- Why Should Criminal Justice Professionals Be Worried?
- How Should the EM Industry Respond?
- What Can We Expect in the Next Two Years?
The UK’s AI Arms Race: 8 Million Deepfakes and 1,200 Fraud Reports a Day
The Worcester conference arrives at a moment when Britain is grappling with AI-enabled crime at a scale that would have been unimaginable five years ago. The numbers from Cifas’ Fraudscape 2026 report are stark: over 444,000 fraud cases reported to the National Fraud Database in 2025 — more than 1,200 per day — the highest single-year total ever recorded. Criminals are using generative AI tools and organized networks to scale attacks that previously required human labor for each individual victim.
The deepfake dimension is especially alarming. The UK government estimates that 8 million deepfakes were shared in 2025, up from 500,000 in 2023 — a 16-fold increase in two years. Deepfake fraud attempts in Britain nearly doubled in 2025 (94% increase), the second-highest rate globally after France. And these are not just celebrity face-swaps on social media. National Trading Standards reports that criminals are cloning victims’ voices using AI to authorize fraudulent direct debits — harvesting voice samples through fake “lifestyle survey” phone calls, then feeding those recordings to AI models that reproduce the victim’s voice convincingly enough to deceive bank verification systems.
This is the “crime” side of the AI conference title. The “policing” side is equally contentious.
UK Facial Recognition: From Court-Ordered Shutdown to Nationwide App Rollout
No country illustrates the tension between AI-powered policing and civil liberties more sharply than the UK. The trajectory is remarkable:
- August 2020: The Court of Appeal ruled South Wales Police’s facial recognition use unlawful — finding failures in Data Protection Impact Assessments and Public Sector Equality Duty compliance
- December 2024: South Wales Police and Gwent Police launched a facial recognition mobile app that lets officers identify suspects in the street using their phones in near real-time — becoming the first UK forces to deploy Operator Initiated Facial Recognition (OIFR)
- August 2025: The ICO audited both forces and found them compliant with data protection law — including South Wales Police, whose earlier deployment had been ruled unlawful by the Court of Appeal
- 2025: The Equality and Human Rights Commission (EHRC) was granted permission to intervene in a judicial review of the Metropolitan Police’s Live Facial Recognition practices, arguing they violate human rights law
Meanwhile, a black anti-knife-crime worker was detained by the Met following a false facial recognition match — a case now before the High Court with Big Brother Watch’s support. The Met itself made 61 facial recognition-assisted arrests out of 528 total arrests at the 2024 Notting Hill Carnival.
This is the regulatory landscape that Alex Murray OBE — the conference keynote speaker — navigates daily as the UK’s police lead for artificial intelligence. The title of his keynote, “Artificial Intelligence: Opportunities and Challenges for Policing,” understates the scale of the task: deploying AI fast enough to keep pace with AI-enabled criminals while operating within a legal framework that is still being litigated in real time.
Where Is AI Already Embedded in Criminal Justice?
The short answer: everywhere that data flows, AI follows. The National Institute of Justice (NIJ) is funding machine-learning algorithms that provide real-time guidance to community supervision officers, enabling dynamic risk assessment that updates as new behavioral data arrives — not just at scheduled check-ins.
The IDRACS project (Integrated Dynamic Risk Assessment for Community Supervision), funded by NIJ, found that incorporating detailed criminal history timing and dynamic behavioral measures significantly improved prediction accuracy over traditional static risk instruments. Period-specific models — AI trained on data from the first year of supervision specifically — proved most accurate for identifying individuals likely to reoffend early in their supervision term.
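The gain from incorporating dynamic measures is easy to see in miniature. The sketch below is a hypothetical illustration on synthetic data (the feature names, rates, and model choice are assumptions, not IDRACS's actual design): it compares a static-history baseline against a model that also sees first-year behavioral signals.

```python
# Minimal sketch of a "period-specific" risk model in the spirit of IDRACS:
# train on outcomes from the first year of supervision, using dynamic
# behavioral features alongside static history. All data here is synthetic
# and all feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000

# Static features: prior convictions, age at supervision start (illustrative).
static = np.column_stack([
    rng.poisson(2, n),            # prior_convictions
    rng.integers(18, 60, n),      # age_at_start
])
# Dynamic features observed during the first year of supervision.
dynamic = np.column_stack([
    rng.poisson(1, n),            # missed_checkins
    rng.poisson(0.5, n),          # curfew_violations_per_month
])
# Synthetic outcome: reoffense within the first year.
logits = 0.4 * dynamic[:, 0] + 0.6 * dynamic[:, 1] + 0.1 * static[:, 0] - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([static, dynamic])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

static_model = LogisticRegression().fit(X_tr[:, :2], y_tr)   # static-only baseline
period_model = LogisticRegression().fit(X_tr, y_tr)          # static + dynamic

print(f"static-only accuracy:  {static_model.score(X_te[:, :2], y_te):.3f}")
print(f"period-specific model: {period_model.score(X_te, y_te):.3f}")
```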
Meanwhile, law enforcement agencies are deploying AI for:
- Predictive resource allocation: Forecasting where crime is most likely to occur so patrol units can be positioned proactively
- Investigation acceleration: Natural language processing that sifts through thousands of case files, social media posts, and surveillance footage to identify patterns
- Digital forensics: AI-driven analysis of seized devices, network traffic, and encrypted communications — the Worcester conference will feature Emma-Jane Scrase, digital investigations manager at West Mercia Police, on precisely this topic
- Cybercrime detection: Machine learning models trained to identify fraud patterns, phishing campaigns, and dark web activity
What Does This Mean for Electronic Monitoring?
GPS ankle monitors generate massive volumes of location, movement, and compliance data. A single device transmitting position fixes every 60 seconds produces 1,440 data points per day. Multiply that across a caseload of 500 supervisees, and a supervision agency is managing 720,000 data points daily — far beyond what human officers can meaningfully review.
This is where AI becomes not just useful but necessary. Intelligent alert management systems can distinguish between a supervisee who missed curfew by three minutes because of traffic and one who has been systematically testing geofence boundaries over three weeks. Pattern recognition algorithms can identify behavioral trajectories that precede absconding events — often days before the actual violation occurs.
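What boundary-testing detection might look like in code is simple to sketch. The snippet below is a hypothetical heuristic, not any vendor's algorithm; the trigger radius, the minimum event count, and the strictly-decreasing-distance rule are all illustrative assumptions.

```python
# Hypothetical sketch: flag "boundary testing" when a supervisee's closest
# daily approach to an exclusion zone shrinks across a review window.
# Thresholds and the monotonic-trend rule are illustrative, not a
# production detection rule.
from dataclasses import dataclass

@dataclass
class DailyApproach:
    day: int                 # day index within the review window
    min_distance_m: float    # closest approach to the exclusion zone that day

def looks_like_boundary_testing(approaches: list[DailyApproach],
                                min_events: int = 3,
                                trigger_radius_m: float = 500.0) -> bool:
    """True if at least `min_events` approaches inside `trigger_radius_m`
    occur at progressively closer distances."""
    close = sorted((a for a in approaches if a.min_distance_m < trigger_radius_m),
                   key=lambda a: a.day)
    if len(close) < min_events:
        return False
    distances = [a.min_distance_m for a in close]
    # Progressively closer: each approach nearer than the one before.
    return all(d2 < d1 for d1, d2 in zip(distances, distances[1:]))

week = [DailyApproach(0, 480.0), DailyApproach(2, 310.0), DailyApproach(5, 140.0)]
print(looks_like_boundary_testing(week))  # True: three approaches, each closer
```

A real system would score the trend statistically rather than requiring a strict monotonic decrease, but the shape of the logic is the same: the signal lives in the trajectory across days, not in any single position fix.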
The Justice Speakers Institute notes that AI in community supervision helps allocate limited officer resources toward the highest-risk individuals while reducing unnecessary interventions for compliant supervisees. The efficiency gain is real — but so are the risks.
Why Should Criminal Justice Professionals Be Worried?
Stanford Law School’s 2026 report on AI governance in criminal justice identifies a fundamental problem: criminal justice entities largely lack the technical expertise to rigorously evaluate the AI tools they’re deploying. Vendors provide black-box algorithms with performance claims that agencies cannot independently verify.
The concerns are concrete, not theoretical:
- Biased training data: If an AI risk assessment tool is trained on historical arrest data that reflects decades of over-policing in specific neighborhoods, the algorithm perpetuates that bias — flagging individuals as high-risk based on where they live rather than what they’ve done
- Feedback loops: Research published in the Minnesota Journal of Law & Inequality documents cases where predictive policing algorithms directed more patrols to already over-policed neighborhoods, generating more arrests, which “confirmed” the algorithm’s predictions — a self-fulfilling prophecy
- Due process erosion: When a parole board uses an AI recommendation to deny release, does the individual have the right to challenge the algorithm’s reasoning? In most jurisdictions, the answer is no
- Surveillance creep: Electronic monitoring data originally collected for supervision compliance is increasingly being analyzed for investigative purposes — a use that was never contemplated when the supervisee consented to monitoring
How Should the EM Industry Respond?
Electronic monitoring vendors face a strategic choice. They can chase the AI hype cycle — adding machine learning labels to existing alert management systems — or they can build genuinely useful intelligence into their platforms while maintaining the transparency and auditability that criminal justice demands.
The responsible path requires several commitments:
Explainable AI, not black boxes. Supervision officers and courts need to understand why an algorithm flagged a particular alert as high-priority. “The model detected an anomaly” is not adequate. “The supervisee has approached a protected zone three times in the past week at progressively closer distances, suggesting boundary testing” is actionable intelligence.
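As a sketch of the difference, the snippet below (hypothetical, with made-up field names) shows an alert object that carries its rationale alongside its priority, so the explanation travels with the alert into the officer's queue and, if needed, into court.

```python
# Hypothetical sketch: attach a plain-language rationale to every flagged
# alert so an officer (or a court) can see why it was prioritized.
from dataclasses import dataclass

@dataclass
class Alert:
    priority: str    # e.g. "high", "routine"
    rationale: str   # plain-language explanation, never just "anomaly detected"

def explain_boundary_testing(n_approaches: int, days: int,
                             first_m: float, last_m: float) -> Alert:
    return Alert(
        priority="high",
        rationale=(f"Supervisee approached a protected zone {n_approaches} times "
                   f"in the past {days} days at progressively closer distances "
                   f"({first_m:.0f} m down to {last_m:.0f} m), consistent with "
                   f"boundary testing."),
    )

alert = explain_boundary_testing(3, 7, 480, 140)
print(f"[{alert.priority}] {alert.rationale}")
```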
Hardware reliability as the foundation. AI is only as good as the data feeding it. If tamper detection sensors produce 15-30% false positives, any AI layer built on top of that data will inherit that noise. Zero false-alarm hardware — like fiber-optic tamper detection, which produces a clean binary signal — provides the clean data foundation that AI requires.
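The arithmetic behind this claim is worth making explicit. The sketch below applies Bayes' rule with an illustrative base rate (the 1-in-1,000 figure is an assumption for the example, not industry data): at a 20% false-alarm rate, fewer than 1% of tamper alerts are genuine, while near-zero false-alarm hardware pushes that figure toward 50%.

```python
# Back-of-envelope sketch: with a 20% per-check false-positive rate and a low
# base rate of genuine tamper events, nearly every tamper alert is noise.
# The base rate here is an illustrative assumption, not industry data.
def tamper_alert_precision(false_positive_rate: float,
                           true_positive_rate: float,
                           base_rate: float) -> float:
    """P(genuine tamper | alert) via Bayes' rule."""
    p_alert = (true_positive_rate * base_rate
               + false_positive_rate * (1 - base_rate))
    return true_positive_rate * base_rate / p_alert

# Noisy strap sensor: 20% false alarms, 1-in-1,000 checks is a real tamper.
print(f"noisy sensor: {tamper_alert_precision(0.20, 0.95, 0.001):.2%}")
# Near-zero false-alarm hardware: 0.1% false alarms, same base rate.
print(f"clean sensor: {tamper_alert_precision(0.001, 0.95, 0.001):.2%}")
```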
Human-in-the-loop decision-making. AI should prioritize and contextualize alerts, not make autonomous decisions about violations. The final judgment on whether a GPS exclusion zone breach warrants officer response must remain with a trained human.
Data governance by design. GPS tracking data is among the most sensitive data types in criminal justice. Vendors need to build data retention policies, access controls, and purpose limitations into their platforms from the architecture level — not as afterthoughts bolted on to meet procurement requirements.
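One way to make "by design" concrete is to represent purpose limitation and retention as checkable data rather than policy documents. The sketch below is a minimal, hypothetical illustration (the purpose labels and the 180-day retention period are assumptions, not legal guidance).

```python
# Hypothetical sketch: purpose limitation and retention encoded as data, so
# every query is checked against policy rather than relying on convention.
# Purposes and retention periods are illustrative, not legal guidance.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AccessPolicy:
    allowed_purposes: frozenset[str]   # e.g. supervision compliance only
    retention: timedelta               # hard cap on how long fixes are kept

GPS_POLICY = AccessPolicy(
    allowed_purposes=frozenset({"supervision_compliance"}),
    retention=timedelta(days=180),
)

def authorize(policy: AccessPolicy, purpose: str, recorded_at: datetime) -> bool:
    """Deny by default: the purpose must be whitelisted and the data in-retention."""
    in_retention = datetime.now(timezone.utc) - recorded_at <= policy.retention
    return purpose in policy.allowed_purposes and in_retention

fix_time = datetime.now(timezone.utc) - timedelta(days=30)
print(authorize(GPS_POLICY, "supervision_compliance", fix_time))    # True
print(authorize(GPS_POLICY, "criminal_investigation", fix_time))    # False
```

The design point is architectural: a query for an investigative purpose fails by default, and widening access becomes an explicit, auditable policy change rather than a quiet repurposing of supervision data.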
What Can We Expect in the Next Two Years?
The trajectory is clear. AI will become standard in EM platform software within 24 months. The vendors who gain market share will be those whose AI features produce measurable improvements in supervision outcomes — reduced false alarms, earlier identification of absconding risk, smarter resource allocation — while maintaining the auditability and explainability that courts require.
Conferences like the University of Worcester event serve an important function: they create space for police professionals, academics, and criminal justice practitioners to interrogate these technologies before they become embedded and unchallengeable. The electronic monitoring industry would benefit from similar forums where vendors, supervision agencies, and civil liberties advocates can jointly develop governance standards for AI-enhanced supervision.
The technology is moving fast. The governance frameworks need to move faster.