National Cyber Security Centre: AI Data Security

With data security now recognised as a critical enabler of trustworthy AI, the UK’s National Cyber Security Centre – in conjunction with its counterpart national cyber security agencies in the United States, Australia and New Zealand – has published guidance outlining best practices for securing the data used to train and operate AI systems.*

The guidance outlines 10 actions businesses can take when deploying AI solutions to ensure that data is secure throughout the AI lifecycle, encouraging organisations to:

  1. Source reliable data and track data provenance – As much as possible, only use data from authoritative sources and maintain a secure provenance database that is cryptographically signed.
  2. Verify and maintain data integrity – Use checksums and cryptographic hashes during storage and transport (illustrated, together with the signing in item 3, in the first sketch after this list).
  3. Employ digital signatures to authenticate trusted data revisions – Datasets should be authenticated against quantum-resistant digital signature standards, with original versions of data cryptographically signed, and revisions signed by the person making the change.
  4. Leverage trusted infrastructure – Use trusted infrastructure to support data integrity, security and transparency.
  5. Classify data and use access controls – These controls should be based on sensitivity and necessary protection measures.
  6. Encrypt data – Ensure encryption at rest, in transit and during processing (see the second sketch below).
  7. Store data securely – Use certified storage devices that comply with NIST FIPS 140-3.
  8. Leverage privacy-preserving techniques – This could be through employing data depersonalisation techniques, deploying a differential privacy framework (see the third sketch below), or using decentralised learning techniques, such as federated learning, across local datasets.
  9. Delete data securely – For example, through a cryptographic erase (also shown in the second sketch below), block erase or data overwrite.
  10. Conduct ongoing data security risk assessments – Evaluate the data security landscape, identify risks, and prioritise actions to minimise security incidents.
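
To make items 2 and 3 concrete, the first sketch below recomputes a SHA-256 checksum and authenticates a signed revision. It is a minimal illustration rather than the guidance’s prescribed implementation: the dataset payload is hypothetical, and Ed25519 stands in, purely for familiarity, for the quantum-resistant signature standards the guidance actually calls for.

```python
import hashlib

from cryptography.hazmat.primitives.asymmetric import ed25519

def sha256_hex(data: bytes) -> str:
    """Checksum used to detect corruption in storage or transit (item 2)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical dataset payload; in practice this would be streamed from disk.
dataset = b"id,label\n1,cat\n2,dog\n"
published_checksum = sha256_hex(dataset)

# A consumer recomputes the digest and compares it with the published value.
assert sha256_hex(dataset) == published_checksum

# Item 3: the person making a revision signs the new digest so the change can
# be authenticated later. A quantum-resistant scheme would replace Ed25519.
signing_key = ed25519.Ed25519PrivateKey.generate()
signature = signing_key.sign(published_checksum.encode())

# verify() raises cryptography.exceptions.InvalidSignature if the digest or
# signature has been tampered with.
signing_key.public_key().verify(signature, published_checksum.encode())
```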
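Items 6 and 9 are related in practice: when data at rest is encrypted, destroying every copy of the key amounts to a cryptographic erase. The second sketch illustrates both, using Fernet (AES-128-CBC with HMAC) from the same Python cryptography package; in production, keys would live in a key management service backed by FIPS 140-3 validated hardware (see item 7), not in a local variable.

```python
from cryptography.fernet import Fernet

# Key generation; a real deployment would generate and hold this in a KMS.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"sensitive training records"        # illustrative payload
ciphertext = fernet.encrypt(plaintext)           # encryption at rest (item 6)
assert fernet.decrypt(ciphertext) == plaintext   # key holders can still read

# Cryptographic erase (item 9): once every copy of the key is destroyed, the
# ciphertext is computationally unrecoverable, which is why the guidance
# lists this alongside block erase and overwriting.
del key, fernet
```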
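For item 8, the third sketch shows the Laplace mechanism, a standard differential privacy building block: noise calibrated to a query’s sensitivity is added to an aggregate before release, so no single record can be confidently inferred from the output. The records and the epsilon value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def dp_count(flags: list[bool], epsilon: float) -> float:
    """Release a count under epsilon-differential privacy.

    Adding or removing one record changes a count by at most 1, so the
    query's sensitivity is 1 and the Laplace noise scale is 1 / epsilon.
    """
    return sum(flags) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon means stronger privacy but a noisier released statistic.
opted_in = [True, False, True, True, False]
print(dp_count(opted_in, epsilon=0.5))
```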

Alongside this, the publication focuses on three major areas of data security risk in AI systems:

Data Supply Chain - Third-party datasets may contain inaccuracies or malicious content. The guidance therefore recommends that organisations verify data at the point of ingestion, consider using cryptographic signatures and hashes, and require certifications from data providers.

The guidance also notes the particular risk posed by web-crawled datasets, as such data is “substantially less curated” than other large-scale datasets. For these, the authors recommend consensus-based approaches – where data reliability is judged by how frequently it appears across other webpages – alongside effective data curation practices.

Maliciously Modified (Poisoned) Data - Data poisoning refers to malicious attempts to corrupt the training process and produce unreliable model behaviour; the risks include disinformation and statistical bias. To mitigate this, the authors suggest organisations invest in anomaly detection, secure their data pipelines and training environments, and apply metadata validation and deduplication, as sketched below.
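
As a rough illustration of two of those defences, the sketch below deduplicates records by content hash and flags gross statistical outliers for review. It is a toy example with an assumed record format; production pipelines would add near-duplicate detection and richer metadata validation.

```python
import hashlib
import statistics

def dedupe(records: list[str]) -> list[str]:
    """Drop exact duplicates, a common vector for amplifying poisoned text."""
    seen: set[str] = set()
    unique = []
    for record in records:
        digest = hashlib.sha256(record.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(record)
    return unique

def flag_length_outliers(records: list[str], z: float = 3.0) -> list[str]:
    """Flag records whose length deviates grossly from the corpus norm."""
    lengths = [len(r) for r in records]
    mean = statistics.mean(lengths)
    stdev = statistics.stdev(lengths)
    return [r for r in records if stdev and abs(len(r) - mean) / stdev > z]

corpus = ["a normal sentence."] * 50 + ["x" * 5000]
print(len(dedupe(corpus)))                 # 2: the 49 verbatim copies are dropped
print(len(flag_length_outliers(corpus)))   # 1: the 5,000-character record
```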

Data Drift - Gradual changes in input data can, over time, degrade the performance of an AI system. Suggested mitigations include regularly monitoring both input data and model outputs, continuously retraining models on up-to-date datasets, and deploying quality control tools; one common drift check is sketched below.
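
One common way to operationalise that monitoring (not a method named in the guidance itself) is the Population Stability Index: bin the training-time distribution of a feature, then measure how far live inputs have drifted across those bins. A minimal sketch follows, with synthetic data and an alert threshold of 0.2, a widely used rule of thumb rather than a standard.

```python
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between training-time and live inputs."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    # Clip empty bins to avoid division by zero and log(0).
    ref_frac = np.clip(ref_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

reference = np.random.default_rng(0).normal(0.0, 1.0, 10_000)  # training data
live = np.random.default_rng(1).normal(0.5, 1.0, 10_000)       # shifted inputs
if psi(reference, live) > 0.2:
    print("Input drift detected - consider retraining on up-to-date data.")
```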

The document concludes by emphasising the “paramount importance” of data security when developing and operating AI systems: deploying these best practices and risk mitigation strategies is crucial to ensuring that AI outputs are accurate and reliable.

This publication reflects the growing international consensus around secure-by-design AI development. As the UK continues to position itself as a leader in trustworthy AI, aligning with global best practices on data integrity and security will be essential to unlocking innovation, ensuring consumer trust, and maintaining competitiveness.

* The guidance was co-authored by the US National Security Agency’s Artificial Intelligence Security Center (AISC), the US Cybersecurity and Infrastructure Security Agency (CISA), the US Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), New Zealand’s National Cyber Security Centre (NCSC-NZ), part of the Government Communications Security Bureau, and the UK’s National Cyber Security Centre (NCSC-UK).


If you have any further questions, please do get in contact with the team:

Daniella Bennett Remington

Policy Manager - Digital Regulation, techUK

Audre Verseckaite

Senior Policy Manager, Data & AI, techUK

Jill Broom

Head of Cyber Resilience, techUK

Annie Collings

Programme Manager, Cyber Resilience, techUK

Raya Tsolova

Senior Programme Manager, techUK


 
