16 May 2025
by Sean Tickle

The AI Cyber Security Code of Practice

Guest blog by Sean Tickle, Cyber Services Director at Littlefish, published as part of our #SeizingTheAIOpportunity Campaign Week 2025

What are the ethical and security challenges of AI, and can the UK’s voluntary AI Cyber Security Code of Practice help organisations balance innovation and risk?

Much the same as with other areas of IT, artificial intelligence has become an integral part of modern cyber security, playing a crucial role in both defensive and offensive strategies and offering advanced tools to detect and mitigate threats.

Still, while AI (and earlier machine learning and automation technologies) has been a powerful force for strengthening our cyber security defences, it also introduces new risks and ethical concerns that must be carefully managed to ensure a secure and responsible digital future.

The UK government’s recent introduction of a voluntary AI Cyber Security Code of Practice seeks to address these issues by providing a framework that balances innovation with responsibility.

A closer look at the AI Cyber Security Code of Practice

In January 2025 – and in response to a call for feedback on the matter – the UK’s Department for Science, Innovation and Technology (DSIT) unveiled the AI Cyber Security Code of Practice. This voluntary initiative acknowledges the unique challenges of AI technology and aims to establish internationally agreed, baseline cyber security principles for it (the code will, for example, be used to help develop a global standard through the European Telecommunications Standards Institute (ETSI)).

The code also builds on the NCSC’s Guidelines for Secure AI System Development, which were published in November 2023 to support the secure design, development, deployment and operation of AI systems. These guidelines emphasise the importance of adhering to secure by design principles, ensuring that AI systems are robust, transparent, and accountable.

In a nutshell, the code is intended to provide guidelines that will secure AI systems throughout their entire lifecycle. This includes ensuring data integrity, system robustness, and addressing transparency and accountability challenges.

To guide these cyber security requirements, the code defines five stakeholder groups that make up the AI supply chain, each with specific responsibilities for securing AI systems, and is structured around thirteen key principles.

The five stakeholder groups described in the code are:

  1. Developers: the individuals or teams responsible for designing and building the system or application.
  2. System operators: the personnel who manage and maintain the live system, ensuring its functionality and uptime.
  3. Data custodians: those responsible for protecting and managing the data used within the system, including access control and data security measures.
  4. End-users: the individuals who directly interact with and utilise the system to achieve their goals.
  5. Affected entities: any external parties or groups that might be impacted by the system’s operation, even if they don’t directly use it. 

The thirteen principles the code is structured around are:

  1. Raise awareness of AI security threats and risks 
  2. Design your AI system for security as well as functionality and performance 
  3. Evaluate the threats and manage the risks to your AI system 
  4. Enable human responsibility for AI systems 
  5. Identify, track and protect your assets 
  6. Secure your infrastructure 
  7. Secure your supply chain 
  8. Document your data, models and prompts 
  9. Conduct appropriate testing and evaluation 
  10. Communication and processes associated with End-users and Affected entities 
  11. Maintain regular security updates, patches and mitigations 
  12. Monitor your system’s behaviour 
  13. Ensure proper data and model disposal 

To accompany the code, the UK government has also published an implementation guide to support organisations as they enhance their cyber defences to meet the provisions outlined within it. The guide helps implementers understand how each provision can be met and includes examples to contextualise certain provisions.

Why do we need an AI Cyber Security Code of Practice? 

As touched upon, an AI Cyber Security Code of Practice is necessary because artificial intelligence systems present security risks that are distinct from those of traditional software (for example, risks associated with access to company data, model obfuscation, and indirect prompt injection). The timing of this initiative also matters: AI technology is advancing rapidly, and the potential risks associated with its misuse are becoming increasingly apparent.

An example of this is the way AI’s role in cyber security is inherently dualistic. On one hand, AI offers security professionals enhanced capabilities. It allows us to analyse vast datasets at unprecedented speeds, identifying patterns and anomalies that would be impossible for humans to detect in real-time. This enables quicker threat detection and more effective responses to potential breaches.
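
As a purely illustrative aside (not drawn from the code of practice itself), the short Python sketch below shows the kind of anomaly detection described above, using scikit-learn’s IsolationForest to flag unusual login events in a small synthetic dataset. The feature names, values and contamination rate are assumptions made up for the example.

import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative only: the features, values and thresholds are hypothetical.
rng = np.random.default_rng(42)

# Synthetic "normal" telemetry: [login_hour, megabytes_transferred, failed_logins]
normal_events = np.column_stack([
    rng.normal(13, 2, 500),    # logins cluster around early afternoon
    rng.normal(50, 10, 500),   # typical data volumes
    rng.poisson(0.2, 500),     # the occasional failed attempt
])

# A couple of suspicious events: 3 a.m. logins, large transfers, repeated failures
suspicious_events = np.array([
    [3, 900, 6],
    [2, 750, 4],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)
print(model.predict(suspicious_events))  # -1 marks events scored as anomalous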

However, on the other hand, the same technology is just as accessible to malicious actors as it is to us. This means cyber criminals can leverage AI to develop sophisticated attacks, such as AI-driven malware that adapts to security measures in real-time or phishing schemes that use AI to craft highly convincing fraudulent messages. This escalation creates a continuous ‘arms race’ between defenders and attackers in the cyber security landscape.

Additionally, we mustn’t forget that AI systems themselves can become targets, and dangerously so. Malicious actors may attempt to manipulate the data inputs of AI models — a tactic known as data poisoning — to alter their behaviour in malicious ways. Remember, AI systems are only as intelligent as the data they’re fed.
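
To make the data-poisoning risk more concrete, here is a minimal, hypothetical Python sketch of one common mitigation: checking training files against known-good checksums before training, so silently altered data is rejected. The file names and digests are placeholders, not a prescribed control from the code.

import hashlib
from pathlib import Path

# Illustrative only: placeholder file names and truncated SHA-256 digests.
APPROVED_HASHES = {
    "train_batch_01.csv": "9f2c0a...",
    "train_batch_02.csv": "4b7d11...",
}

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_tampered_files(data_dir: Path) -> list[str]:
    """Return the names of training files that fail the integrity check."""
    return [
        name for name, expected in APPROVED_HASHES.items()
        if not (data_dir / name).exists() or sha256_of(data_dir / name) != expected
    ]

if __name__ == "__main__":
    failures = find_tampered_files(Path("data"))
    if failures:
        raise SystemExit(f"Possible data tampering detected, aborting training: {failures}")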

By encouraging proactive security measures, the code helps us stay ahead of evolving AI-related cyber threats and helps ensure that these technologies are developed and deployed securely, maximising their benefits while minimising risks.

Balancing security and ethics 

To round off this article, there is one more compelling reason to embrace the principles of the AI Cyber Security Code of Practice: it helps organisations navigate the complex interplay of ethics and security in AI applications.

Key ethical principles reflected in the code include:  

Transparency

Organisations are encouraged to be open about how their AI systems work, including the data they use and the decision-making processes they follow. This transparency helps build trust with users and stakeholders, as it allows them to understand and scrutinise the AI’s actions.

Accountability

The code also emphasises that organisations must take responsibility for the outcomes of their AI systems. This means having mechanisms in place to address any negative impacts or unintended consequences that may arise. By being accountable, organisations can demonstrate their commitment to ethical AI practices and foster a culture of responsibility.

Fairness

AI systems must be designed and trained to avoid biases that could lead to discriminatory outcomes. This involves using diverse and representative data sets, as well as regularly auditing AI systems to identify and mitigate any biases. Ensuring fairness in AI applications helps promote equality and prevents harm to vulnerable groups.
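
As a simple, hypothetical illustration of the kind of bias audit mentioned above, the Python sketch below computes per-group selection rates and a demographic-parity ratio over a toy set of model decisions. The groups, outcomes and the 0.8 ‘four-fifths’ threshold are example assumptions; a real audit would consider many more metrics.

from collections import defaultdict

# Illustrative only: toy decisions and a rule-of-thumb threshold, not real audit data.
decisions = [                 # (group, did the model approve the application?)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += int(approved)

selection_rates = {group: approvals[group] / totals[group] for group in totals}
parity_ratio = min(selection_rates.values()) / max(selection_rates.values())

print(f"Selection rates: {selection_rates}")
if parity_ratio < 0.8:
    print(f"Demographic-parity ratio {parity_ratio:.2f} is below 0.8 - investigate for possible bias")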

Privacy

Organisations must ensure that AI systems handle personal data responsibly and in compliance with data protection regulations. This includes implementing robust data security measures and giving users control over their data. Respecting privacy not only protects individuals but also enhances the credibility and acceptance of AI technologies.

As we can see, by integrating ethical as well as security principles into their AI strategies, organisations can build AI systems that are not only secure but also trustworthy and responsible. This approach fosters user confidence, ensures compliance with regulations, and promotes the development of AI technologies that benefit society. 

The AI Cyber Security Code of Practice provides a valuable foundation for organisations aiming to deploy AI securely, responsibly, and ethically. If you would like to find out more about how Littlefish can assist in securing your IT environment for the adoption of AI tools, please get in touch with our experienced security specialists using the button on this page.



techUK - Seizing the AI Opportunity

For the UK to fully seize the AI opportunity, citizens and businesses must have trust and confidence in AI. techUK and our members champion the development of reliable and safe AI systems that align with the UK’s ethical principles.

AI assurance is central to this mission. Our members engage directly with policy makers, regulators, and industry leaders to influence policy and standards on AI safety and ethics, contributing to a responsible innovation environment. Through these efforts, we help build public trust in AI adoption whilst ensuring our members stay ahead of regulatory developments.


Author

Sean Tickle

Cyber Services Director, Littlefish