Department for Science, Innovation and Technology Launches Trusted Third-Party AI Assurance Roadmap
Today the Department for Science, Innovation and Technology (DSIT) launched its Trusted Third-Party AI Assurance Roadmap. The roadmap is intended to support the widespread adoption of safe and responsible AI across the UK. It acknowledges the UK’s unique position to be a world leader in AI assurance services, building on strong offerings in the professional services and technology sectors. The roadmap focuses on independent companies that check AI systems, rather than internal assurance functions within companies. The following insight provides an overview of the key areas the roadmap covers to support the UK’s AI assurance ecosystem. A thriving assurance ecosystem is crucial to ensuring that AI systems are developed and deployed responsibly and in compliance with the law, while increasing confidence in AI systems to support AI adoption and economic growth.
An Overview of the Roadmap: Government actions to address market barriers for AI assurance
This roadmap is focused on third-party providers of assurance; these firms play a role in independently verifying the quality and trustworthiness of AI systems. The roadmap sets out four immediate steps government will take to spur the growth and improve the quality of the UK's AI assurance market, as committed to in the AI Opportunities Action Plan.
The government is exploring interventions to support a high-quality AI assurance ecosystem by addressing challenges for this trusted third-party assurance market. These include:
Establishing a consortium of key stakeholders across the tech sector to professionalise the AI assurance market.
Developing a skills and competencies framework for AI assurance to create clear pathways for professional development.
Working with the consortium to map information access best practices between assurance providers and developers to ensure AI assurance providers have the information they need to assure AI systems effectively.
Establishing an AI Assurance Innovation Fund to develop novel AI assurance solutions to future-proof the market and ensure the UK is ready to respond to transformative developments in AI capabilities.
The challenges and solutions proposed by the government to address market barriers are further explained below:
Professionalisation
The challenge: The Roadmap highlights that the quality of goods and services currently provided by AI assurance companies is unclear, and that the quality infrastructure needed to ensure assurance providers supply high-quality products and services is still developing.
The solution: The UK government will establish an AI assurance profession by convening a consortium of stakeholders including quality infrastructure organisations and professional bodies. In the first year, this consortium will develop foundational elements like a voluntary code of ethics, skills frameworks, and information access requirements for AI assurance providers. Once these building blocks are in place, the consortium will work toward creating professional certification schemes, with AI auditing likely serving as the initial focus due to its relative maturity and critical role in independently verifying AI system trustworthiness.
Skills
The challenge: The Roadmap has identified that providers struggle to find employees with the necessary combination of skills, including AI/machine learning knowledge, law, ethics, governance, and standards. While some training exists in individual areas, there is no clear understanding of exactly which combinations of skills assurance professionals need, making career pathways into the sector unclear. The sector particularly needs to encourage diversity to effectively challenge AI system assumptions and identify the full range of associated risks.
The solution: The government partnered with the Alan Turing Institute to research AI auditor skills and competencies, using audit as an example of the expertise needed across AI assurance. They found that auditors must evaluate both technical compliance and broader societal impacts, with all roles requiring knowledge of risks, regulations, ethics, and sector-specific expertise. Currently, assurance providers must train auditors in-house due to lack of practical training options and high costs. While relevant skills exist in various occupational standards and programs (like cybersecurity, data science, internal audit), there's no clear pathway specifically for aspiring AI audit professionals.
Information Access
The challenge: The Roadmap discusses how there is currently a lack of access to information about AI systems. Firms being audited may be unwilling to share the required information due to commercial confidentiality concerns, or lack of awareness of the risks their systems pose. Without a clear understanding of the information that is required, they may also fear oversharing information and putting the security of their systems at risk.
The solution: The UK government will work with the consortium to map what information AI assurance providers need access to, including system requirements, inputs/outputs, algorithms, oversight mechanisms, and governance documentation. Different assurance services require varying levels of access from full "white box" to minimal documentation access. Potential solutions include technical approaches like secure evaluation environments, transparency standards like IEEE 7001:2021, and government-backed best practice guidelines for information sharing between firms and assurance providers.
Innovation
The challenge: According to the Roadmap there is a lack of support for the development of innovative testing and evaluation methods. As new transformative capabilities arise, new tools and services will be required to assure AI systems. Innovation in AI assurance is complex and will require inputs from diverse experts, including AI developers. However, there are limited forums for collaborative research and development on AI assurance in the UK. Currently, assurance firms face information asymmetries with AI developers and weak market incentives for investment, limiting their ability to develop effective tools for emerging AI capabilities.
The solution: The UK government is establishing an AI Assurance Innovation Fund to develop new tools and services for assuring highly capable AI systems, addressing the challenge that transformative AI will present novel risks requiring continuous innovation in assurance. Building on the successful 2024 Fairness Innovation Challenge (which awarded over £500,000 for bias auditing solutions) and complementing the AI Security Institute's work on advanced AI security risks, this fund aims to bring together diverse expertise from developers, deployers, and governance experts to foster collaborative R&D and distribute knowledge across the UK's growing assurance ecosystem.
techUK welcomes the roadmap, which will support the development of the UK's AI assurance ecosystem, key to building trust and driving AI adoption. We support the approach taken, which builds on existing assurance expertise and methodologies and prioritises harmonising with international standards while maintaining flexibility for different applications and technical developments. techUK stands ready to work with government and the proposed consortium of stakeholders to develop an inclusive, commercially viable AI assurance ecosystem that positions the UK as a leader in AI assurance.
Sue Daley OBE, Director of Tech and Innovation
techUK
Tess Buckley
Programme Manager - Digital Ethics and AI Safety, techUK
A digital ethicist and musician, Tess holds an MA in AI and Philosophy, specialising in ableism in biotechnologies. Their professional journey includes working as an AI Ethics Analyst with a dataset on corporate digital responsibility, followed by supporting the development of a specialised model for sustainability disclosure requests. Currently at techUK as programme manager in digital ethics and AI safety, Tess focuses on demystifying and operationalising ethics through assurance mechanisms and standards. Their primary research interests encompass AI music systems, AI fluency, and technology created by and for differently abled individuals. Their overarching goal is to apply philosophical principles to make emerging technologies both explainable and ethical.
Outside of work Tess enjoys kickboxing, ballet, crochet and jazz music.
Sue leads techUK's Technology and Innovation work.
This includes work programmes on cloud, data protection, data analytics, AI, digital ethics, Digital Identity and Internet of Things as well as emerging and transformative technologies and innovation policy.
In 2025, Sue was honoured with an Order of the British Empire (OBE) for services to the Technology Industry in the New Year Honours List.
She has been recognised as one of the most influential people in UK tech by Computer Weekly's UKtech50 Longlist and in 2021 was inducted into the Computer Weekly Most Influential Women in UK Tech Hall of Fame.
A key influencer in driving forward the data agenda in the UK, Sue was co-chair of the UK government's National Data Strategy Forum until July 2024. As well as being recognised in the UK's Big Data 100 and the Global Top 100 Data Visionaries for 2020 Sue has also been shortlisted for the Milton Keynes Women Leaders Awards and was a judge for the Loebner Prize in AI. In addition to being a regular industry speaker on issues including AI ethics, data protection and cyber security, Sue was recently a judge for the UK Tech 50 and is a regular judge of the annual UK Cloud Awards.
Prior to joining techUK in January 2015, Sue was responsible for Symantec's Government Relations in the UK and Ireland. She has spoken at events including the UK-China Internet Forum in Beijing, the UN IGF and European RSA on issues ranging from data usage and privacy to cloud computing and online child safety. Before joining Symantec, Sue was a senior policy advisor at the Confederation of British Industry (CBI). Sue has a BA in History and American Studies from Leeds University and a master's degree in International Relations and Diplomacy from the University of Birmingham. Sue is a keen sportswoman and in 2016 achieved a lifelong ambition to swim the English Channel.
The UK is a global leader in AI innovation, development and adoption.
AI has the potential to boost UK GDP by £550 billion by 2035, making adoption an urgent economic priority. techUK and our members are committed to working with the Government to turn the AI Opportunities Action Plan into reality. Together we can ensure the UK seizes the opportunities presented by AI technology and continues to be a world leader in AI development.
Get involved: techUK runs a busy calendar of activities including events, reports, and insights to demonstrate some of the most significant AI opportunities for the UK. Our AI Hub is where you will find details of all upcoming activity. We also send a monthly AI newsletter which you can subscribe to here.
At the UK National AI Policy, Infrastructure and Skills Summit 2025, our CEO Julian David OBE shared insights on the UK’s AI progress — from major investments and skills initiatives to the ongoing challenges of ensuring innovation remains ethical, inclusive, and responsible.
Today, the Department for Science, Innovation and Technology (DSIT) announced the launch of the AI Growth Lab, a new proposed framework for a cross-economy sandbox initiative designed to “accelerate innovation and cut bureaucracy in a safe environment”.
Sign-up to our monthly newsletter to get the latest updates and opportunities from our AI and Data Analytics Programme straight to your inbox.
Contact the team
Kir Nuthi
Head of AI and Data, techUK
Kir Nuthi
Head of AI and Data, techUK
Kir Nuthi is the Head of AI and Data at techUK.
She holds over seven years of government affairs and tech policy experience in the US and UK. Kir previously headed up the regulatory portfolio at a UK advocacy group for tech startups and held various public affairs roles in US tech policy. These roles involved policy research and campaigns on competition, artificial intelligence, access to data, and pro-innovation regulation.
Kir has an MSc in International Public Policy from University College London and a BA in both Political Science (International Relations) and Economics from the University of California San Diego.
Outside of techUK, you are likely to find her attempting studies at art galleries, attempting an elusive headstand at yoga, mending and binding books, or chasing her dog Maya around South London's many parks.
Usman joined techUK in January 2024 as Programme Manager for Artificial Intelligence.
He leads techUK’s AI Adoption programme, supporting members of all sizes and sectors in adopting AI at scale. His work involves identifying barriers to adoption, exploring solutions, and helping to unlock AI’s transformative potential, particularly its benefits for people, the economy, society, and the planet. He is also committed to advancing the UK’s AI sector and ensuring the UK remains a global leader in AI by working closely with techUK members, the UK Government, regulators, and devolved and local authorities.
Since joining techUK, Usman has delivered a regular drumbeat of activity to engage members and advance techUK's AI programme. This has included two campaign weeks, the creation of the AI Adoption Hub (now the AI Hub), the AI Leader's Event Series, the Putting AI into Action webinar series and the Industrial AI sprint campaign.
Before joining techUK, Usman worked as a policy, regulatory and government/public affairs professional in the advertising sector. He has also worked in sales, marketing, and FinTech.
Usman holds an MSc from the London School of Economics and Political Science (LSE), a GDL and LLB from BPP Law School, and a BA from Queen Mary University of London.
When he isn’t working, Usman enjoys spending time with his family and friends. He also has a keen interest in running, reading and travelling.
Visit our AI Hub - the home of all our AI content:
Enquire about membership:
Become a techUK member
Our members develop strong networks, build meaningful partnerships and grow their businesses as we all work together to create a thriving environment where industry, government and stakeholders come together to realise the positive outcomes tech can deliver.
Programme Manager, Digital Ethics and AI Safety, techUK
Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.
Prior to techUK, Tess worked as an AI Ethics Analyst, building the first dataset on Corporate Digital Responsibility (CDR), and later supported the development of a large language model focused on answering ESG questions for Chief Sustainability Officers. Alongside other responsibilities, she distributed the CDR dataset to investors who wanted to better understand the digital risks of their portfolios, drew narratives and patterns from the data, and collaborated with leading institutes to support academics in AI ethics. She has authored articles for outlets such as ESG Investor, the Montreal AI Ethics Institute, The FinTech Times, and Finance Digest, covering topics like CDR, AI ethics, and tech governance, and leveraging company insights to contribute valuable industry perspectives. Tess is Vice Chair of the YNG Technology Group at YPO, an AI Literacy Advisor at Humans for AI, a Trustworthy AI Researcher at Z-Inspection Trustworthy AI Labs and an Ambassador for AboutFace.
Tess holds an MA in Philosophy and AI from Northeastern University London, where she specialised in biotechnologies and ableism, following a BA from McGill University, where she joint-majored in International Development and Philosophy, minoring in Communications. Tess’s primary research interests include AI literacy, AI music systems, the impact of AI on disability rights and the portrayal of AI in media (narratives). In particular, Tess seeks to operationalise AI ethics and use philosophical principles to make emerging technologies explainable and ethical.
The UK has a world-leading regulatory system that supports the economy while protecting society. However, strategic reforms to the UK’s regulatory regime could help unlock its full potential as a vital catalyst for growth, bringing considerable rewards across industry.