Deepfakes and Disinformation: What impact could they have on elections in 2024?

As more people head to the polls in 2024 than in any previous year, what is the potential impact of deepfakes and disinformation on elections?

Over 40 countries, representing more than half of the global population, are set to hold elections. At the same time, new technologies such as generative artificial intelligence have proliferated and could be used to shape political narratives. 

The rise of synthetic content and AI-generated media, particularly deepfake images, audio and video, presents both opportunities and risks, as explored in this blog post by techUK.

However, an obvious risk in this election year is the challenge these technologies pose to safeguarding the integrity of democratic campaigns and election results.

The question is not whether deepfakes will be maliciously deployed to spread disinformation, but rather how effectively governments, the media, technology companies and the general public will respond to their use by malicious actors.

techUK's members already offer a range of solutions in the deepfake and disinformation space, from detection and mitigation strategies and fact-checking services to content provenance and authenticity initiatives.


What are deepfakes, synthetic media, misinformation and disinformation?

Synthetic media: an all-encompassing term for any type of content, whether video, image, text or voice, that has been partially or fully generated using artificial intelligence or machine learning. Types of synthetic media range from AI-written music and text generation (such as OpenAI's ChatGPT) to computer-generated imagery (CGI), virtual reality (VR), augmented reality (AR) and voice synthesis.

Deepfakes: a specific subset of synthetic media that focuses on manipulating or altering visual or auditory information to create convincing fake content, across images, audio and video. A common example is a video that replaces one person's face with another's.

Misinformation: false or inaccurate information—getting the facts wrong.

Disinformation: false information which is deliberately intended to mislead—intentionally misstating the facts.


Impact on Elections

Deepfakes and advanced synthetic media serve as amplifiers of existing threats, including deceptive campaign ads, mis- and disinformation, voter suppression, and attacks on election workers.

These issues predate the emergence of deepfakes and would persist even without AI-generated content. Misinformation has long been a feature of election campaigns around the world: photoshopped images, memes, and fake audio and video of politicians have been around for decades.

However, recent events, including Slovakia's 2023 election, have seen deepfakes used to spread disinformation and suggest a tangible impact on election outcomes. AI-generated audio recordings impersonating a liberal candidate circulated two days before polls opened, creating confusion and eroding voter trust.

Fact-checkers rushed to verify that the audio was fake, but the candidate ultimately lost the election. While the effect of the deepfake has not been quantified, there is a risk that such fakes could have significant impacts in tight races.

But is it just the fakes we need to address most urgently? Sensity, a company specialising in deepfake detection and identity verification, concluded in 2021 that deepfakes had had no "tangible impact" on the 2020 US presidential election. It found no instance of "bad actors" spreading disinformation with deepfakes.

Even today, while we see instances such as the Slovakia election deepfake or the Keir Starmer deepfake, it is hard to point to a convincing deepfake that has misled people in a tangible or quantifiable way. Arguably, the danger lies not in deepfake videos of politicians but in the broader manipulation of content to manufacture false narratives.


Eroding Trust in Information

The proliferation of deepfakes challenges the democratic process because it can deprive the public of the accurate information needed to make informed decisions in elections. However, the deeper concern lies in the emergence of the "liar's dividend": a phenomenon where the very existence of generative AI engenders an atmosphere of mistrust, allowing bad actors to dismiss even genuine evidence as fake.

The fear is that the sheer volume of AI-generated content could make it challenging for people to distinguish between authentic and manipulated information. Furthermore, the boom in large language models and in text-to-speech and text-to-video software speeds up the creation of such content.

The erosion of trust in election information becomes a pervasive issue which requires a collaborative, cross-sector response.


The Tech Sector's Role

The tech industry is leading the effort to combat the role of deepfakes and AI-generated misinformation in elections, through detection, mitigation and content provenance technologies. From cutting-edge AI algorithms to collaborative industry initiatives and policy advocacy, techUK members are actively shaping the future of election security. AI provides tools that can help identify and correct false information, but there is a delicate balance to strike.

1 - Fact Checking

Fact-checking will have a key role in mitigating the impact of deepfakes in upcoming elections. To address the proliferation of AI-driven misinformation, several companies in the sector have introduced fact-checking tools. Meta has mandated the disclosure of AI-generated content in political advertisements on its platforms, and Google has developed SynthID, a tool that discreetly integrates a digital watermark into an image's pixels.
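SynthID's actual technique is proprietary, but the general idea of pixel-level watermarking can be illustrated with a toy example. The Python sketch below uses a crude least-significant-bit scheme, far simpler and more fragile than production watermarks, to hide a short bit string in an image's pixels and read it back.

```python
# Toy illustration of pixel-level watermarking. This is NOT SynthID's
# actual method (which is proprietary and far more robust); it simply
# shows the idea of hiding a signal in pixel values, here via the
# least significant bit (LSB) of each pixel.
import numpy as np

def embed_watermark(image: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write each watermark bit into the LSB of successive pixels."""
    marked = image.copy().ravel()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit  # clear the LSB, then set it
    return marked.reshape(image.shape)

def read_watermark(image: np.ndarray, n_bits: int) -> list[int]:
    """Recover the first n_bits least significant bits."""
    return [int(p) & 1 for p in image.ravel()[:n_bits]]

# Mark a random 8-bit greyscale image with a 4-bit tag and read it back.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
tag = [1, 0, 1, 1]
marked = embed_watermark(img, tag)
assert read_watermark(marked, 4) == tag  # tag recovered; pixels barely change
```

Production watermarks such as SynthID are designed to survive compression, cropping and re-encoding, which a naive LSB mark does not.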

Logically.ai 

Logically.ai actively tackles the harms associated with mis- and disinformation, alongside Logically Facts, one of the world’s largest commercial fact-checking organisations. 

Most of the electoral interference that Logically has observed is far more complicated than the odd deepfake. The company has noted much more traditional approaches at play, as exemplified in recent elections in Argentina and Taiwan.

Argentina's 2023 presidential election was met with narratives circulating on X propagating Agenda 2030-related conspiracy theories and claiming that the election process was fraudulent. Logically's proprietary coordinated inauthentic behaviour (CIB) detection model identified a network of 54 X accounts that amplified the former narrative.
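Logically's CIB model is proprietary, but one common coordination signal is easy to sketch: many distinct accounts posting near-identical text within a short time window. In the illustrative Python below, the field names and thresholds are assumptions made for the example, not Logically's.

```python
# Illustrative sketch of one coordination signal: many distinct accounts
# posting near-identical text within a short time window. Logically's
# actual CIB model is proprietary; the field names and thresholds here
# are assumptions for the sake of the example.
from collections import defaultdict
from datetime import timedelta

def normalise(text: str) -> str:
    """Crude normalisation so trivially edited copies still match."""
    return " ".join(text.lower().split())

def find_coordinated_clusters(posts, window=timedelta(minutes=10), min_accounts=5):
    """posts: dicts with 'account', 'text' and 'time' (datetime) keys."""
    by_text = defaultdict(list)
    for post in posts:
        by_text[normalise(post["text"])].append(post)
    clusters = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p["time"])
        accounts = {p["account"] for p in group}
        # Suspicious if many accounts pushed the same message in quick succession.
        if len(accounts) >= min_accounts and group[-1]["time"] - group[0]["time"] <= window:
            clusters.append((text, sorted(accounts)))
    return clusters
```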

Logically has several tools already available to identify this kind of inauthentic behaviour. To deal with the challenge of the sheer scale of disinformation that generative AI threatens to unleash, the company has developed a new multilingual, multimodal tool to identify content that is worthy of a fact-checker’s attention, from a dataset of millions of pieces of content. 

Knowing that fact-checking is time-consuming, and that the workload is only increasing with the widespread availability of generative AI tools, Logically is accelerating the process of claim extraction. Its tool processes videos into text and then breaks the content down into statements. These statements can then be scored using proprietary AI models on their likelihood of constituting a 'claim' that can be tested, and ultimately verified or disproven.
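As a rough sketch of that pipeline shape (Logically's actual models are proprietary; transcribe() and claim_score() below are placeholders for a speech-to-text system and a trained classifier), claim extraction might look something like this:

```python
# Sketch of a claim-extraction pipeline: transcribe, split into
# statements, score each for "claim-likeness". transcribe() and
# claim_score() are placeholders for a speech-to-text system and a
# trained classifier; they are not Logically's actual models.
import re

def transcribe(video_path: str) -> str:
    """Placeholder: a real pipeline would run speech-to-text here."""
    raise NotImplementedError

def split_statements(transcript: str) -> list[str]:
    """Break a transcript into sentence-level statements."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", transcript) if s.strip()]

def claim_score(statement: str) -> float:
    """Placeholder: a trained model would score how checkable a statement
    is. Crude proxy: statements containing figures rank higher."""
    return 1.0 if re.search(r"\d", statement) else 0.1

def extract_claims(transcript: str, threshold: float = 0.5) -> list[str]:
    """Keep only the statements likely to be verifiable claims."""
    return [s for s in split_statements(transcript) if claim_score(s) >= threshold]

print(extract_claims("Turnout was 71% in 2020. What a lovely day. Crime rose 30%."))
# -> ['Turnout was 71% in 2020.', 'Crime rose 30%.']
```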

2 - Content Provenance

In February 2021, Adobe, Microsoft, Intel, Arm, BBC, and Truepic launched a formal coalition for standards development: The Coalition for Content Provenance and Authenticity (C2PA). This mutually governed consortium was created to accelerate the pursuit of pragmatic, adoptable standards for digital provenance, serving creators, editors, publishers, media platforms, and consumers. 

This builds on the work of the Content Authenticity Initiative (CAI), announced by Adobe in 2019. Today, the group has more than 2,000 members from across industries, including the Associated Press, the New York Times, the Wall Street Journal, Stability AI, Spawning.ai and Nikon. The cross-industry initiative seeks to address misinformation and provide media transparency for better evaluation of content, alongside focusing on education and advocacy, prototype implementations in real-world contexts at scale, and developing an engaged community of implementers and users of the technology.

Adobe 

Alongside its work through the CAI and the C2PA to help lead the fight against AI-generated deepfakes, Adobe's Content Credentials act like a digital 'nutrition label' that can show information such as the creator's name, the date an image was created, what tools were used to create it and any edits that were made. Content Credentials are built on an open standard, so anyone can implement them in their own tools and platforms for free.
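To make the 'nutrition label' idea concrete, the snippet below sketches the kind of provenance record a Content Credential carries. This simplified JSON stands in for the real C2PA manifest format, which is cryptographically signed and bound to the file itself; all field values here are illustrative.

```python
# Simplified stand-in for the provenance record a Content Credential
# carries. The real C2PA manifest is a signed, tamper-evident structure
# embedded in the file; this plain JSON only illustrates the fields.
import json
from datetime import datetime, timezone

manifest = {
    "creator": "Jane Photographer",                # who made it (illustrative)
    "created": datetime(2024, 1, 15, tzinfo=timezone.utc).isoformat(),
    "tools": ["Camera X", "Photo Editor Y"],       # hardware and software used
    "edits": ["crop", "exposure adjustment"],      # actions applied to the image
    "ai_generated": False,                         # disclosure flag
}
# In the real standard this record is hashed, cryptographically signed
# and bound to the image, so later tampering becomes detectable.
print(json.dumps(manifest, indent=2))
```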

Recently awarded one of Fast Company's "next big things in tech", the CAI is committed to working with industry peers and policymakers towards widespread implementation of Content Credentials to bring more transparency to AI-generated content everywhere, particularly before the 2024 elections. You can hear more about Adobe's approach by listening to Adobe's General Counsel, Dana Rao, on the Verge's Decoder podcast.

3 - Detection and Mitigation Tools

Intel 

Intel is leveraging its AI expertise to tackle deepfakes by developing algorithms that detect and mitigate manipulated content. Through machine learning and advanced analytics, Intel is working to provide tools that verify media authenticity using biometrics. Last year it launched the world's first real-time deepfake detector. The detection platform utilises the FakeCatcher algorithm, which analyses 'blood flow' in video pixels to return results in milliseconds with 96% accuracy.

Most deep learning-based detectors look at raw data to try to find signs of inauthenticity and identify what is wrong with a video. In contrast, FakeCatcher looks for authentic clues in real videos, by assessing what makes us human—subtle “blood flow” in the pixels of a video. These blood flow signals are collected from all over the face and algorithms translate these signals into spatiotemporal maps. Then, using deep learning, they can instantly detect whether a video is real or fake. 
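Intel's implementation is proprietary, but the underlying remote-photoplethysmography (rPPG) idea can be sketched: average colour over facial regions in each frame to recover a pulse-like signal, stack the per-region signals into a spatiotemporal map, and classify. In the illustrative Python below, the face regions and the classify() heuristic are assumptions standing in for FakeCatcher's trained models.

```python
# Sketch of the rPPG idea behind blood-flow-based detection. Intel's
# FakeCatcher is proprietary; the face regions and the classify()
# heuristic below are assumptions standing in for its trained models.
import numpy as np

def region_signal(frames: np.ndarray, region: tuple) -> np.ndarray:
    """Average the green channel over one face region in every frame.
    frames: (T, H, W, 3) video; region: (y0, y1, x0, x1) in pixels."""
    y0, y1, x0, x1 = region
    return frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))

def spatiotemporal_map(frames: np.ndarray, regions: list) -> np.ndarray:
    """Stack per-region pulse-like signals into a (regions x time) map."""
    return np.stack([region_signal(frames, r) for r in regions])

def classify(st_map: np.ndarray) -> bool:
    """Placeholder for a trained deep-learning model. Crude heuristic:
    genuine pulse signals show some periodic variation over time."""
    return st_map.std(axis=1).mean() > 1e-3

video = np.random.rand(150, 128, 128, 3)             # 5 seconds at 30 fps
face_regions = [(30, 50, 40, 60), (60, 80, 40, 60)]  # e.g. forehead, cheek
looks_real = classify(spatiotemporal_map(video, face_regions))
```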

Microsoft 

Microsoft’s recent report “Protecting Election 2024 from Foreign Malign Influence” flagged that nations may combine traditional techniques with AI and other new technologies to threaten the integrity of electoral systems. 

As a result, Microsoft announced five new Election Protection Commitments to safeguard voters, candidates and campaigns, and election authorities worldwide. The company launched Content Credentials as a Service as part of its work with the C2PA. The tool allows parties to authenticate their videos and photos with watermark credentials and is intended to help candidates control their own images and messages.

The company also deployed a new “Campaign Success Team”, to advise and support campaigns as they navigate the world of AI, combat the spread of cyber influence campaigns, and protect the authenticity of their own content and images.  

Finally, the company also created an “Election Communications Hub” to support democratic governments around the world as they build secure and resilient election processes. This hub will provide election authorities with access to Microsoft security and support teams in the days and weeks leading up to their election, allowing them to reach out and get swift support if they run into any major security challenges.

4 - Media Literacy

Media literacy plays a vital role in addressing the malicious uses of deepfakes by empowering individuals to critically evaluate information, discern between authentic and manipulated content, and make informed decisions. It acts as a proactive defence against the negative impact of deepfakes on public trust and the democratic process.

It will be critical for government to lead the policy around communicating with the public on provenance, information literacy, and what to look for when engaging with political content online. Without media literacy, it is difficult to see how there can be an effective, long-term solution to the threats of deepfakes and disinformation.

Given there are legislative examples in the past which can be used as starting points, techUK and our members believe it is vital to explore this solution in greater detail. We are also keen to work more closely with Ofcom's Making Sense of Media Advisory Panel to improve the online skills, knowledge and understanding of the public when it comes to deepfakes and disinformation.


The Future of Deepfakes and Elections

To effectively tackle the impact of deepfakes, both in the electoral sphere and more widely, inclusive cross-sector dialogue to scale out solutions will be vital. Given the global nature of this threat, policymakers will need to encourage cross-sector and international cooperation to develop consistent policies and responses.

By fostering a media-literate population and leveraging technological advancements responsibly, we can also fortify our defences against the manipulation of information. 

Tech companies are already developing tools for exposing fakes, including databases for training detection algorithms and watermarks for digital files. However, human content moderators, media organisations and political parties will also play a crucial role in verifying the authenticity of content, especially in diverse cultural contexts.

By actively engaging with techUK member companies at the forefront of deepfake defence, we can promote responsible AI advancement while effectively countering the threat of manipulative content, safeguarding the integrity of elections and fostering a secure digital landscape.

techUK are continuing to work on deepfakes and synthetic media alongside members through the Online Fraud Working Group. 

If you’d like to find out more about our work on fraud and deepfakes and how to get involved, please contact: [email protected]


Watch our session on "Elections in the Age of Generative AI" from techUK's Digital Ethics Summit


Tech Policy Leadership Conference 2024

Are you interested in the direction of tech policy in the UK ahead of the general election in 2024? techUK are hosting our policy leadership conference in March titled How can the next Government use technology to build a better Britain? 

The conference will feature addresses from the Conservative Party and Labour Party on their ideas for the future of tech policy, as well as the publication of industry research on what the priorities should be for the next government.

To learn more about the conference, taking place on Monday 11 March, or to book tickets, please click here.


 

Javahir Askari
Policy Manager, Digital Regulation, techUK

Neil Ross
Associate Director, Policy, techUK

Oliver Alderson
Policy and Public Affairs - Team Assistant, techUK

 
