Join us online on 24 April to hear more from the Alan Turing Institute's Centre for Emerging Technology and Security and industry experts on the topic of AI in national security.
National security bodies face a significant challenge when assessing the risks of AI systems designed in-house. This challenge becomes even harder when AI systems are designed by industry. Research from the Centre for Emerging Technology and Security (CETaS) proposes a framework for AI assurance that is tailored to the specific challenges facing national security bodies and their suppliers.
Involving industry in the design and development of AI is essential if UK national security bodies want to keep pace with cutting-edge capabilities. However, when AI development is outsourced, direct oversight may be reduced, introducing new risks. The CETaS research therefore introduces an AI assurance framework tailored to UK national security, enabling robust assessment of whether AI systems meet requirements. The framework centres on a structured system card template, which guides how AI system properties should be documented by AI suppliers and customers, covering legal, supply chain, performance, security and ethical considerations.
Confirmed speakers:
- Marion Oswald, Senior Research Associate in Safe and Ethical AI, CETaS
- Rosamund, Research Associate, CETaS
- Emily Campbell-Ratcliffe, Head of AI Assurance, Responsible Technology Adoption Unit, Department for Science, Innovation & Technology
- David Green, Director of AI, Adarga