Government invites views on interventions to secure Artificial Intelligence

Artificial intelligence (AI) is transforming our daily lives, and its adoption offers great opportunity. In the race to deploy new capabilities, however, the cyber security risks to AI systems often go unaddressed. To help remedy this, and to ensure that the many benefits of AI can be realised, the Department for Science, Innovation and Technology (DSIT) has set out specific interventions on which it is seeking feedback.

Government is proposing a two-part intervention: a voluntary Code of Practice (the Code), which will then be taken into a global standards development organisation for further development. The proposed Code aims to provide practical support to developers on how to implement a secure-by-design approach as part of their AI design and development process. Based on the National Cyber Security Centre's Guidelines for Secure AI System Development, the Code sets baseline security requirements for all AI technologies and distinguishes the actions that need to be taken by different stakeholders across the AI supply chain.

The Call for Views on the Cyber Security of AI will close on Friday 9 August 2024.

techUK will be submitting a response on behalf of members. If you would like to contribute to techUK’s response, please contact Jill Broom at [email protected].

Members are also encouraged to submit their own responses to the Call for Views.

You can view the research reports that government has published to support this Call for Views here.

Please note that government is also consulting on a second – and closely linked – code of practice which sets out requirements for developers to make their software resilient against tampering, hacking and sabotage. You can find out more about the Call for Views on the software vendors code of practice here.

More on the proposed cyber security of AI code of practice and global standard

Cyber security is an essential precondition for the safety of AI systems and is required to ensure, amongst other things, the privacy, reliability and secure use of models. Furthermore, government recognises that it is imperative to collaborate with international partners to achieve consensus on baseline security requirements. Government's Call for Views document sets out a proposed two-part intervention: (1) create a voluntary code of practice that will (2) be used as the basis for the development of a global technical standard.

The scope of the voluntary Code of Practice and proposed technical standard covers all AI technologies, including frontier AI, and addresses the entire AI lifecycle. It focuses on the cyber security risks to AI systems themselves, rather than wider risks that stem from the use of AI.

Government’s intention, subject to the feedback of this Call for Views, is to submit an updated voluntary code of practice to the European Telecommunications Standards Institute (ETSI) in September 2024 to help inform the development of this global standard on baseline cyber security of AI systems and models. It says that this Call for Views is only the start of the process for contributing to this work.

The proposed voluntary code of practice is based on the NCSC's Guidelines for Secure AI System Development and sets out practical steps for stakeholders across the AI supply chain – especially developers and system operators – to protect end users. The Code begins by defining the audience and stakeholders it is intended for, as well as the terminology it uses, before setting out the core principles that those stakeholders are expected to follow.

The 12 Code of Practice principles, grouped under secure design, development, deployment and maintenance, are:

Secure Design

  1. Raise staff awareness of threats and risks.
  2. Design your system for security as well as functionality and performance.
  3. Model the threats to your system.
  4. Ensure decisions on user interactions are informed by AI-specific risks.

Secure Development

  5. Identify, track and protect your assets.
  6. Secure your infrastructure.
  7. Secure your supply chain.
  8. Document your data, models and prompts.
  9. Conduct appropriate testing and evaluation.

Secure Deployment

  10. Communication and processes associated with end-users.

Secure Maintenance

  11. Maintain security updates for AI models and systems.
  12. Monitor your system’s behaviour.

What would happen once the voluntary Code is up and running?

Government intends to review the Code and, where necessary, update it to reflect changes in the technology itself, the risk landscape and regulatory regimes. The proposed Code is voluntary; however, government will continue to work with stakeholders, including industry, monitoring and evaluating the uptake and effectiveness of the Code to determine whether regulatory action is needed in the future.

You can view the full Call for Views on the Cyber Security of AI document here.

Which cyber security codes of practice are relevant to my organisation?

DSIT has produced several cyber security codes of practice as part of government’s broader approach to improve baseline cyber security practices and cyber resilience across the UK. A modular approach has been developed to help organisations easily identify which codes (and which provisions within the codes) are relevant to them according to their business functions and the types of tech they use or manufacture. In the case of the AI Cyber Security Code of Practice, government’s expectation is that relevant organisations should, at a minimum, also adhere to the provisions in both the Software and Cyber Governance codes of practice. Find out more about government’s cyber security codes of practice here.

Jill Broom

Programme Manager, Cyber Security, techUK

Dan Patefield

Head of Cyber and National Security, techUK

Annie Collings

Programme Manager, Cyber Security and Central Government, techUK
