26 Jan 2024
by Daniel Fitter

Detecting deepfakes: A roadmap to UK resilience in the face of GenAI

Guest blog by Daniel Fitter, Director of Strategy & Transformation at PUBLIC #NatSec2024

In her 2020 book Deepfakes: The Coming Infocalypse, Nina Schick warned that advances in AI would give machines the power to generate wholly synthetic media, and that this technology would be available to anyone at next to no cost. In less than three years, that prediction has become reality. Fuelled by the rapid advancement and widespread adoption of Generative AI (GenAI) tools over the last 12 months, deepfakes have moved from the fringes of internet meme culture to emerge as a serious risk to national security. As a result, the UK national security community should take immediate action to build an effective deepfake monitoring capability to address this growing threat.

Heating up of the deepfake threat landscape

The UK and its allies face a range of national security threats enabled by AI-generated content, in particular deepfakes. These threats are global and cross-border in nature, affecting election integrity, democratic stability, and economic security. In January 2024, the World Economic Forum identified mis/disinformation spread by AI as the most severe global risk for 2024; and following the UK AI Safety Summit in November 2023, the Government Office for Science highlighted that the development of technologies enabling AI-generated mis/disinformation is outpacing the national security community's ability to detect and respond to the resulting threats.

Many of us will have noticed the rapid increase in both the quantity and quality of deepfake audio and video clips targeting political leaders in recent months. Clips that in the past were relatively easy to identify as fakes are now increasingly difficult to distinguish from real content at a glance. We have already seen several examples of this type of attempted interference, and the threats are likely to persist: election interference (e.g. the UK deepfakes of Keir Starmer and Rishi Sunak), the undermining of public trust in leaders (such as the deepfake of Singapore's Prime Minister Lee), and the more direct instigation of political violence and radicalisation, particularly on the far right.

What tools do we currently have?

As with cybersecurity and other threats in the defence and national security domain, there is a continuous arms race between offensive and defensive actors. At the moment, the balance of power sits firmly with bad actors using deepfakes for offensive purposes, while nation states lack a proven, effective detection and response capability. The UK therefore needs to continue building both its resilience to online threats and the industrial base that enables better detection of, and response to, deepfakes and related GenAI threats.

The Safety Tech sector - comprising companies that develop technologies or solutions to facilitate safer online experiences - represents a natural partner for the UK national security community as part of a multifaceted response. In December 2023, Paladin Capital Group published a PUBLIC-led report, International State of Safety Tech 2023, which found that the UK - alongside the US and Australia - is already a global leader in Safety Tech. Indeed, of the 58 Safety Tech firms around the world focused on detecting and disrupting false, misleading, or harmful narratives, nearly 30% (16) are UK-based.

These companies and other researchers have already begun to develop methods for tackling deepfake interference, including by using the following techniques:

  • Deepfake Zoos: Creating a secure data lake of deepfake content that can be used to quickly identify ‘known’ deepfakes, or that can serve as a trusted research environment (TRE) for deepfake detection technology. This is akin to hash matching, in which hashes (compact digital fingerprints of files) are compared against one another; a minimal sketch follows this list.
  • Radioactive Data: Seeding the datasets on which large language models (LLMs) - which are used to build GenAI applications - are trained with data that is more easily identifiable to detection technologies. Meta has shown this is possible and can be effective even if only 1% of the total training data is ‘radioactive’; a toy illustration follows this list.
  • Safety by Design in GenAI Development: Safety by design refers to an approach which places user safety and rights at the centre of the design and development of online products and services. As it relates to deepfake/disinformation campaigns, this includes improving understanding of user behaviours and user experience on GenAI applications to create design interventions that reduce the risk that harmful content is generated.
  • Emerging Detection Tools: We have begun to see the development of deepfake detection tools employing a variety of detection models, with widely varying levels of effectiveness.
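
To make the ‘deepfake zoo’ idea concrete, below is a minimal sketch of hash matching in Python. It uses a simple average (perceptual) hash; the function names, the Pillow dependency, and the Hamming-distance threshold are all illustrative assumptions, and production systems rely on far more robust hashes (for example PDQ or PhotoDNA-style algorithms).

```python
# Minimal sketch of matching a candidate image against a "deepfake zoo".
# Illustrative only: real systems use more robust perceptual hashes.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Fingerprint an image: grayscale, shrink, threshold each pixel at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def match_against_zoo(candidate: str, zoo: dict[str, int], threshold: int = 10) -> list[str]:
    """Return the names of known fakes within `threshold` bits of the candidate."""
    h = average_hash(candidate)
    return [name for name, known in zoo.items() if hamming_distance(h, known) <= threshold]

# Hypothetical usage, assuming a folder of previously identified deepfake stills:
# zoo = {"fake_frame.png": average_hash("known_fakes/fake_frame.png")}
# print(match_against_zoo("suspect_frame.png", zoo))
```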
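
The ‘radioactive data’ idea can likewise be illustrated with a toy example: add a secret carrier direction to training samples, then test a batch of content for statistical alignment with that carrier. Everything below (the feature-space framing, the dimensions, and the marking strength) is an illustrative assumption; the published Meta research embeds the mark so that it remains detectable in the weights of a model trained on the data.

```python
# Toy sketch of "radioactive" data marking and detection (assumptions only).
import numpy as np

rng = np.random.default_rng(0)
dim = 512
carrier = rng.standard_normal(dim)
carrier /= np.linalg.norm(carrier)  # secret unit direction, known only to the marker

def mark(features: np.ndarray, strength: float = 0.2) -> np.ndarray:
    """Shift feature vectors slightly along the secret carrier direction."""
    return features + strength * carrier

def mean_alignment(features: np.ndarray) -> float:
    """Average projection of a batch of feature vectors onto the carrier."""
    return float(np.mean(features @ carrier))

# In practice only a small fraction (~1%) of a dataset would be marked; a whole
# toy batch is marked here so the statistical signal is easy to see.
clean = rng.standard_normal((1000, dim))
radioactive = mark(clean)

print(mean_alignment(clean))        # ~0: no systematic alignment
print(mean_alignment(radioactive))  # ~0.2: the mark is statistically detectable
```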

What should HMG do to increase our capabilities?

Considering both the range and increasing scale of AI-enabled threats to national security, it is imperative that the UK government continues to take decisive action in developing and catalysing defensive tools. Below, we provide three practical recommendations for the UK Government to tackle these emerging risks more effectively:

  1. HMG should act now, in concert with Five Eyes (FVEY) and NATO partners, to identify, procure and scale leading detection solutions and to direct R&D towards innovative approaches. This may also deliver secondary benefits for the UK prosperity and growth agenda, given likely global demand.
  2. HMG should build and update cross-government mechanisms for secure information sharing of the tools, techniques and procedures (TTPs) of bad actors, as well as available, performant tools to detect current and emerging harm types.
  3. HMG should work collaboratively - both internally and with industry - to develop and deploy targeted capabilities to counter novel threats. This could begin with the development of a joined-up deepfake monitoring and detection capability across data types (e.g. image, video, multimodal), building on lessons learned in child sexual abuse material (CSAM) detection and counter-disinformation.

techUK’s National Security Week 2024 #NatSec2024

The National Security team are delighted to be hosting our annual National Security Week between Monday, 22 January 2024, and Friday, 26 January 2024.

Read all the insights here.

National Security Programme

techUK's National Security programme aims to lead debate on new and emerging technologies which present opportunities to strengthen UK national security, but also expose vulnerabilities which threaten it. Through a variety of market engagement and policy activities, it assesses the capability of these technologies against various national security threats, developing thought-leadership on topics such as procurement, innovation, diversity and skills.

Learn more

National Security updates

Sign up to get the latest updates and opportunities from our National Security programme.

Authors

Daniel Fitter

Director of Strategy & Transformation, PUBLIC