14 May 2025
by David Sully

From Pilot to Scale: Why AI Trust is the Missing Link

Guest blog from David Sully, CEO & Co-Founder of Advai, as part of our #SeizingTheAIOpportunity campaign week 2025.

As AI technologies have evolved since 2020, a critical challenge has emerged alongside their growing capabilities: establish genuine trust. For organisations looking to implement AI solutions, the question of safety and security has become increasingly central to decision-making.  

In just a few years, AI has undergone a seismic shift. The rise of generative AI - exemplified by frontier models - has unlocked extraordinary capabilities. But with these advances comes a stark contradiction: while models are growing in power, they're not becoming proportionately more secure or predictable. 

This disconnect is more than a theoretical risk. 

A recent evaluation demonstrated how leading AI models could be manipulated into approving fraudulent loan applications using just three well-crafted sentences. No specialised hacking skills required - just some cleverly worded prompts. This kind of manipulation, known as "jailbreaking," is disturbingly simple and consistently effective across the industry. 

So, while the capabilities race accelerates, a troubling reality emerges: organisations are unsure whether they can trust these systems to make high-stakes decisions. 

The Deployment Paradox  

Walk into almost any corporate boardroom today and you’ll hear confident talk of AI pilots. But ask to see examples of AI at scale - making real-world decisions - and things get quiet. This is what we call the deployment paradox. 

Despite the buzz, most organisations are stuck in perpetual pilot mode. They’ve tested AI. They might even have a few proofs of concept. But they haven’t scaled. Why? Because they haven’t solved the trust problem. And trust becomes critical at the point where AI starts impacting the bottom line. A CIO in financial services told us recently: “We want AI to review loan applications, but how do we trust its judgement when we’ve seen how easily these systems can be steered off course?” That concern isn't isolated - it's echoed across every sector. 

Closing the Capability-Trust Gap 

The good news is we can close the gap between what AI can do and what we can trust it to do. The key is AI assurance. 

AI assurance doesn’t mean chasing perfection - it means getting to systems that are reliable, testable, and explainable. 

Think of it like a car. You don't expect your car to handle every situation flawlessly. But you do expect to understand its limitations, know when it's at risk of breaking down, and have clear warning signals when something's wrong. 

Effective AI assurance typically operates across three complementary levels: 

  1. Technical Evaluation 
    We test AI models against specific use cases to understand how easily they can be manipulated or behave unpredictably. These tests highlight weaknesses that conventional security testing often misses. 

  1. Business-Aligned Assessment 
    It’s not just about the tech. AI must also meet business requirements - compliance, governance, ethical standards. We help organisations create a trust profile backed by quantifiable evidence, not just vendor promises. 

  1. Ongoing Monitoring 
    AI threats evolve rapidly. That’s why assurance requires real-time defence and detection mechanisms to catch failures and malicious inputs as they happen. 

By following this layered approach, organisations can build a clear picture of their AI systems' strengths, limitations, and risks - one they can confidently share with regulators, partners, or their own board. 

What Leading Organisations Are Doing Differently 

So, who is successfully deploying AI at scale? 

The organisations leading the way have one thing in common: they’ve embedded testing and assurance into their AI development lifecycle. They don’t treat security as an afterthought or rely on generic policies. They use specialist technology and rigorous evaluation. 

Crucially, these organisations are asking a more mature question. Not “Can we trust AI?” but “How do we verify that this AI system is trustworthy for this purpose?” 

Forward-thinking teams in the AI assurance space are increasingly collaborating with regulators to help shape the future of AI governance. Those leading this movement compliance not as a burden but as a chance to build safer, stronger systems and create clarity in a still-uncertain landscape. 

The Path Forward 

The organisations that continue to treat AI as a shiny object without addressing its risks are vulnerable - exposed to manipulation, poor decisions, and operational failure. But those who invest in assurance and robust testing are unlocking real value - competitive advantage, increased efficiency, and innovation they can rely on. AI won’t realise its full potential until we move past blind trust and adopt a new mindset: assess, assure, and adapt. 

That’s the future of trustworthy AI - and it's already being built. 


ai_icon_badge_stroke 2pt final.png

techUK - Seizing the AI Opportunity

For the UK to fully seize the AI opportunity, citizens and businesses must have trust and confidence in AI. techUK and our members champion the development of reliable and safe AI systems that align with the UK’s ethical principles.

AI assurance is central to this mission. Our members engage directly with policy makers, regulators, and industry leaders to influence policy and standards on AI safety and ethics, contributing to a responsible innovation environment. Through these efforts, we help build public trust in AI adoption whilst ensuring our members stay ahead of regulatory developments.

Get involved: techUK runs a busy calendar of activities including events, reports, and insights to demonstrate some of the most significant AI opportunities for the UK. Our AI Hub is where you will find details of all upcoming activity. We also send a monthly AI newsletter which you can subscribe to here.

Upcoming AI events

Latest news and insights

Subscribe to our AI newsletter

AI updates

Sign-up to our monthly newsletter to get the latest updates and opportunities from our AI and Data Analytics Programme straight to your inbox.

Contact the team

Tess Buckley

Tess Buckley

Programme Manager - Digital Ethics and AI Safety, techUK

Visit our AI Hub - the home of all our AI content:

Seizing the AI Opportunity generic AI campaign card.jpg

 

 

Authors

 David Sully

David Sully

CEO & Co-Founder, Advai

David Sully is the CEO and co-founder of Advai, a world leader in testing, evaluating and assuring AI systems. He is an experienced diplomat with over 10 years of experience in problem solving, dealing with technology challenges, international negotiations, arms controls, future threats and cross-government projects.