04 Sep 2025
by Tess Buckley

Department for Science, Innovation and Technology Launches Trusted Third-Party AI Assurance Roadmap

Today the Department for Science, Innovation and Technology (DSIT) launched its Trusted third-party AI assurance roadmap. The roadmap is intended to ensure the widespread adoption of safe and responsible AI across the UK. It acknowledges the UK's unique position to be a world leader in AI assurance services, building on the country's strong offerings in the professional services and technology sectors. The roadmap focuses on independent companies that check AI systems, rather than on internal assurance functions within companies. The following insight provides an overview of the key areas the roadmap covers to support the UK's AI assurance ecosystem. That ecosystem is crucial to ensuring that AI systems are developed and deployed responsibly and in compliance with the law, while increasing confidence in AI systems to support AI adoption and economic growth.

An Overview of the Roadmap: Government actions to address market barriers for AI assurance 

This roadmap is focused on third-party providers of assurance; these firms play a key role in independently verifying the quality and trustworthiness of AI systems. The roadmap sets out four immediate steps the government will take to spur the growth and improve the quality of the UK's AI assurance market, as committed to in the AI Opportunities Action Plan.

The government is exploring interventions to support a high-quality AI assurance ecosystem by addressing challenges facing this trusted third-party assurance market. These include:

  1. Establishing a consortium of key stakeholders across the tech sector to professionalise the AI assurance market.  
  2. Developing a skills and competencies framework for AI assurance to create clear pathways for professional development.  
  3. Working with the consortium to map information access best practices between assurance providers and developers to ensure AI assurance providers have the information they need to assure AI systems effectively.  
  4. Establishing an AI Assurance Innovation Fund to develop novel AI assurance solutions to future-proof the market and ensure the UK is ready to respond to transformative developments in AI capabilities. 

The challenges identified in the roadmap, and the government's proposed solutions to these market barriers, are explained further below:

Professionalisation 

The challenge: The Roadmap highlights that, at present, the quality of goods and services provided by AI assurance companies is unclear, and the quality infrastructure needed to ensure that assurance providers supply high-quality products and services is still developing.

The solution: The UK government will establish an AI assurance profession by convening a consortium of stakeholders including quality infrastructure organisations and professional bodies. In the first year, this consortium will develop foundational elements like a voluntary code of ethics, skills frameworks, and information access requirements for AI assurance providers. Once these building blocks are in place, the consortium will work toward creating professional certification schemes, with AI auditing likely serving as the initial focus due to its relative maturity and critical role in independently verifying AI system trustworthiness. 

Skills 

The challenge: The Roadmap has identified that providers struggle to find employees with the necessary combination of skills, including AI/machine learning knowledge, law, ethics, governance, and standards. While some training exists in individual areas, there is no clear understanding of exactly which combinations of skills assurance professionals need, leaving career pathways into the sector ill-defined. The sector particularly needs to encourage diversity to effectively challenge AI system assumptions and identify the full range of associated risks.

The solution: The government partnered with the Alan Turing Institute to research AI auditor skills and competencies, using audit as an example of the expertise needed across AI assurance. They found that auditors must evaluate both technical compliance and broader societal impacts, with all roles requiring knowledge of risks, regulations, ethics, and sector-specific expertise. Currently, assurance providers must train auditors in-house due to a lack of practical training options and the high costs involved. While relevant skills exist in various occupational standards and programmes (such as cybersecurity, data science, and internal audit), there is no clear pathway specifically for aspiring AI audit professionals.

Information Access 

The challenge: The Roadmap discusses how there is currently a lack of access to information about AI systems. Firms being audited may be unwilling to share the required information due to commercial confidentiality concerns, or a lack of awareness of the risks their systems pose. Without a clear understanding of what information is required, they may also fear oversharing and putting the security of their systems at risk.

The solution: The UK government will work with the consortium to map what information AI assurance providers need access to, including system requirements, inputs/outputs, algorithms, oversight mechanisms, and governance documentation. Different assurance services require varying levels of access, from full "white box" access to minimal documentation-only access. Potential solutions include technical approaches such as secure evaluation environments, transparency standards such as IEEE 7001:2021, and government-backed best-practice guidelines for information sharing between firms and assurance providers. A simple illustration of this tiering is sketched below.
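To make the idea of tiered access concrete, the sketch below models how an assurance engagement might map access tiers to the artefact categories the roadmap lists. This is a minimal illustration only: the tier names, the artefact mapping, and the schema itself are assumptions made for this example, not anything defined in the roadmap.

    from enum import Enum

    # Hypothetical access tiers for an AI assurance engagement.
    # The roadmap describes a spectrum from full "white box" access
    # to minimal documentation access; these tier names are illustrative.
    class AccessLevel(Enum):
        WHITE_BOX = "white box"    # full access, e.g. model internals
        GREY_BOX = "grey box"      # partial access, e.g. inputs/outputs
        BLACK_BOX = "black box"    # minimal access, documentation only

    # Artefact categories drawn from the roadmap's list (system requirements,
    # inputs/outputs, algorithms, oversight mechanisms, governance
    # documentation); which tier needs which is an assumption here.
    REQUIRED_ARTEFACTS = {
        AccessLevel.WHITE_BOX: [
            "system requirements", "inputs/outputs", "algorithms",
            "oversight mechanisms", "governance documentation",
        ],
        AccessLevel.GREY_BOX: [
            "system requirements", "inputs/outputs",
            "governance documentation",
        ],
        AccessLevel.BLACK_BOX: ["governance documentation"],
    }

    def artefacts_for(level: AccessLevel) -> list[str]:
        """Return the artefacts a provider would request at a given tier."""
        return REQUIRED_ARTEFACTS[level]

    if __name__ == "__main__":
        for level in AccessLevel:
            print(f"{level.value}: {', '.join(artefacts_for(level))}")

Run as-is, this prints each tier alongside the artefacts an assurance provider might request at that level; a real mapping would be agreed between the firm and the provider, which is exactly the best-practice question the consortium is being asked to address.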

Innovation 

The challenge: According to the Roadmap, there is a lack of support for the development of innovative testing and evaluation methods. As new transformative capabilities arise, new tools and services will be required to assure AI systems. Innovation in AI assurance is complex and will require input from diverse experts, including AI developers. However, there are limited forums for collaborative research and development on AI assurance in the UK. Currently, assurance firms face information asymmetries with AI developers and weak market incentives for investment, limiting their ability to develop effective tools for emerging AI capabilities.

The solution: The UK government is establishing an AI Assurance Innovation Fund to develop new tools and services for assuring highly capable AI systems, addressing the challenge that transformative AI will present novel risks requiring continuous innovation in assurance. Building on the successful 2024 Fairness Innovation Challenge (which awarded over £500,000 for bias auditing solutions) and complementing the AI Security Institute's work on advanced AI security risks, this fund aims to bring together diverse expertise from developers, deployers, and governance experts to foster collaborative R&D and distribute knowledge across the UK's growing assurance ecosystem. 

techUK welcomes the roadmap, which will support the development of the UK's AI assurance ecosystem, key to building trust and driving AI adoption. We support the approach being taken, which builds on existing assurance expertise and methodologies and prioritises harmonisation with international standards while maintaining flexibility for different applications and technical developments. techUK stands ready to work with government and the proposed consortium of stakeholders to develop an inclusive, commercially viable AI assurance ecosystem that positions the UK as a leader in AI assurance.

Sue Daley OBE, Director, Technology and Innovation, techUK



Authors

Tess Buckley

Programme Manager, Digital Ethics and AI Safety, techUK

Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.  

Prior to techUK, Tess worked as an AI Ethics Analyst, where her work revolved around the first dataset on Corporate Digital Responsibility (CDR) and, later, the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks of their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics such as CDR, AI ethics, and tech governance, and leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs, and an Ambassador for AboutFace.

Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University, where she joint-majored in International Development and Philosophy with a minor in Communications. Tess's primary research interests include AI literacy, AI music systems, the impact of AI on disability rights, and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical.

Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music.

Email: [email protected]
