20 Apr 2026

The UK’s defence AI sovereignty gap is assurance

The sovereignty gap we are not talking about 

The UK is investing heavily to become a sovereign AI nation. The Sovereign AI Unit is backed by up to £500 million, and the Strategic Defence Review 2025 commits more than £4 billion to autonomous systems. Ministers talk about becoming AI makers rather than AI takers. 

But there is a gap that this investment does not close, and it is rarely discussed. Sovereignty in AI is not just a question of where systems are built or hosted; it is a question of whether you can independently verify what they do.

Data residency is not the same as control 

Most sovereign AI discussions focus on infrastructure: where data sits, who owns the compute, and where models are hosted. These factors matter, but they do not define control. The decisive question is whether you can evaluate a system under real operating conditions.

The Anthropic-Pentagon episode in February 2026 illustrates this clearly. The US military faced losing access to an AI system embedded across its operations, with no adequate substitute and no independent way to evaluate its behaviour. The company behind the model stated that current systems are not reliable enough for autonomous weapons. The US government took a different view. There was no shared basis for resolving that disagreement. 

This is an assurance failure. It means operational dependence can persist even when trust breaks down. If this is a challenge for the US, the UK's position is more exposed. 

What assurance means in practice 

Assurance is often mistaken for compliance. In practice, it means rigorously testing systems under realistic and adversarial conditions. 

This includes methods such as adversarial red-teaming, distribution shift testing, and checking whether model confidence aligns with actual accuracy (calibration). These are not theoretical concerns; they describe how deployed systems actually behave.
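To make the last of these concrete, here is a minimal sketch of a calibration check in Python. It bins predictions by stated confidence and measures the gap between average confidence and observed accuracy in each bin, a quantity known as the expected calibration error. The data is synthetic and the setup is illustrative only; it is not drawn from any specific assurance toolchain.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # Bin predictions by stated confidence, then compare average confidence
    # with observed accuracy in each bin; return the weighted average gap.
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(confidences[mask].mean() - correct[mask].mean())
    return ece

# Synthetic example (an assumption, not real evaluation data): a model that
# reports 80-100% confidence but is only right about 70% of the time.
rng = np.random.default_rng(0)
conf = rng.uniform(0.8, 1.0, size=1000)
outcomes = rng.random(1000) < 0.7
print(f"Expected calibration error: {expected_calibration_error(conf, outcomes):.3f}")

A model like this one, far more confident than it is accurate, is exactly the kind of failure independent evaluation exists to surface before deployment.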

In government work, there is often a gap between controlled testing performance and real‑world behaviour. Inputs change, contexts shift, and models fail in ways that are not captured in initial evaluation. In low-stakes settings this is a quality issue. In defence contexts it becomes a safety and sovereignty issue. 
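What that gap can look like is easy to demonstrate. The sketch below, again illustrative and entirely synthetic, trains a simple classifier, scores it on a held-out test set, then scores it again after perturbing the inputs to simulate a distribution shift. The model, data, and shift are assumptions chosen only to make the effect measurable, not a model of any deployed system.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # simple synthetic ground truth
model = LogisticRegression().fit(X[:1000], y[:1000])

X_test, y_test = X[1000:], y[1000:]                             # the "controlled" test set
X_shifted = X_test + rng.normal(scale=1.5, size=X_test.shape)   # the same inputs, shifted

print(f"Accuracy in controlled testing:    {model.score(X_test, y_test):.2f}")
print(f"Accuracy under distribution shift: {model.score(X_shifted, y_test):.2f}")

The second number is reliably lower than the first, and nothing in the controlled evaluation says by how much. That measurement is one a sovereign assurance capability must be able to make for itself.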

If you cannot characterise these failures yourself, you rely on the vendor. That is not sovereignty. 

The Defence Committee's 2025 report identified a gap between AI ambition and delivery in the Ministry of Defence. Research from the Alan Turing Institute's CETaS programme shows that assurance is still treated as a barrier to pace rather than an enabler of it. This framing is mistaken. Assurance done early is what makes rapid deployment sustainable and defensible. 

The UK has the foundations, but not at scale 

The UK is not starting from zero. There are strong foundations across infrastructure, research, and testing. 

Organisations such as DAIC and specialist AI assurance companies are developing evaluation capability. The AI Security Institute is testing frontier models. Dstl has deep expertise in adversarial AI. 

However, these capabilities are fragmented and under-resourced for the scale of procurement now underway. More importantly, they are not yet framed as a core component of sovereignty, or prioritised as one.

With more than 400 AI initiatives underway across the Ministry of Defence and procurement accelerating, the question of who evaluates these systems, and whether they have the capacity to do so, is becoming urgent.

Sovereignty requires independent evaluation 

Sovereign data on a system you cannot evaluate is not meaningful control. It is the appearance of sovereignty without the substance. 

If the UK is serious about strategic autonomy in AI, assurance capability must be treated as critical national infrastructure. It needs to be funded accordingly and embedded into procurement, not added after deployment. 

Without that, the UK risks building AI systems it cannot fully understand, cannot reliably test, and ultimately does not control. 

Author

Victoria Childress

AI Ethics & Technical Lead, Kainos


