This high-level session brings together defence industry leaders, policymakers, and AI ethics experts to explore the critical intersection of AI assurance and defence applications. It will examine how the defence sector can effectively implement AI assurance mechanisms while addressing the sector's unique ethical considerations and safety requirements, and it provides a platform for technology professionals to engage with domain experts on current best practice.
Through keynote speeches, expert panels, and practical use cases, participants will gain insights into building trust in AI systems, implementing robust assurance frameworks, and ensuring responsible innovation in defence applications. The event will culminate in a forward-looking discussion on the future requirements for AI assurance in defence, followed by networking opportunities for attendees to connect and share experiences.
This session is particularly timely as the defence sector continues to lead in AI adoption while grappling with complex ethical considerations and the need for rigorous safety standards. The insights and discussions from this event will contribute to a paper that looks at the broader understanding of sector-specific AI assurance approaches.
Tess Buckley
Senior Programme Manager in Digital Ethics and AI Safety, techUK
Tess is a digital ethicist and musician. After completing an MA in AI and Philosophy, with a focus on ableism in biotechnologies, she worked as an AI Ethics Analyst with a dataset on corporate digital responsibility (funded by investors who wanted to understand their portfolio risks). Tess then supported the development of a specialised model for sustainability disclosure requests. Currently, at techUK, her north star as programme manager in digital ethics and AI safety is demystifying and operationalising ethics through assurance mechanisms and standards. Outside of work, her primary research interests are in AI music systems, AI fluency and tech by/for differently abled folks.
Jeremy manages techUK's defence programme, helping the UK's defence technology sector align itself with the Ministry of Defence - including the National Armaments Directorate (NAD), UK Defence Innovation (UKDI) and Frontline Commands - through a broad range of activities including policy consultation, private briefings and early market engagement. The Programme supports the MOD as it procures new digital technologies.
Prior to joining techUK, from 2016 to 2024 Jeremy was International Security Programme Manager at the Royal United Services Institute (RUSI), coordinating research and impact activities for funders including the FCDO and the US Department of Defense, as well as leading business development and strategy.
Jeremy has an MA in International Relations from the University of Birmingham and a BA (Hons) in Politics & Social Policy from Swansea University.
For the UK to fully seize the AI opportunity, citizens and businesses must have trust and confidence in AI. techUK and our members champion the development of reliable and safe AI systems that align with the UK’s ethical principles.
AI assurance is central to this mission. Our members engage directly with policy makers, regulators, and industry leaders to influence policy and standards on AI safety and ethics, contributing to a responsible innovation environment. Through these efforts, we help build public trust in AI adoption whilst ensuring our members stay ahead of regulatory developments.
Get involved: techUK runs a busy calendar of activities including events, reports, and insights to demonstrate some of the most significant AI opportunities for the UK. Our AI Hub is where you will find details of all upcoming activity. We also send a monthly AI newsletter which you can subscribe to here.
techUK was on the ground in Seoul as part of the UK delegation at the first International AI Standards Summit, which brought together standards development organisations, governments, and industry around a shared priority: international convergence in AI governance.
On 20 November 2025, Ministers Kanishka Narayan MP and Lord Patrick Vallance published the UK's AI for Science Strategy, a plan designed to position the UK at the forefront of AI-driven scientific discovery.
In March, as part of the Tech Together campaign, techUK convened a panel of researchers and AI technologists to explore how Artificial Intelligence systems shape the experiences of queer people.
Sign-up to our monthly newsletter to get the latest updates and opportunities from our AI and Data Analytics Programme straight to your inbox.
Contact the team
Tess Buckley
Senior Programme Manager in Digital Ethics and AI Safety, techUK
Suzanne Brink is the Head of AI Ethics & Governance at Kainos. She is the creator and custodian of Kainos’ AI and data ethics approach, ensuring Kainos stays abreast of the latest standards and techniques and advising clients on the same. She has worked extensively with stakeholders in both the public and private sector to bring ethical considerations to AI deployment. Prior to joining Kainos, Suzanne worked in strategic human resource management with roles in Diversity, Equity and Inclusion and as a consultant for a company providing AI-based solutions in recruitment. She holds a Ph.D. in Social Psychology from the University of Cambridge.
I am currently AI Assurance Lead in the Department for Science, Innovation and Technology.
Prior to joining government, I was Project Assistant (Emerging Tech) at Global Partners Digital, managing the delivery of their projects on AI and emerging technologies. Before this, I completed my PhD in international human rights law under the Human Rights, Big Data and Technology project, which explored the implications of data sharing between the NHS and technology companies for the human right to health.
I write, speak, advise, and train on AI governance-related issues. I focus particularly on national and international approaches to regulating AI, on the role of standards in supporting regulation, and on how private organisations interested in AI can anticipate and adapt to relevant regulatory change.
Robin Riley has had a diverse career across MOD and wider Government at the interface between science, technology, strategy, delivery, and operations. Robin began his career in MOD's experimental weapons division and has a track record as an innovator and disruptor, including numerous 'firsts' in the adoption of AI, digital, hackathons and more. Robin has expertise in a range of defence-related technologies and has served as an advisor to Ministers on complex and high-profile issues. He specialises in bringing together and leading teams with a mix of technical expertise. Robin currently leads the UK's central pathfinding effort to harness AI at scale and pace for Defence, as Head of AI Capability in the Defence AI Centre.