18 Aug 2023
by Javahir Askari

Deepfakes and Synthetic Media: What are they, and how are techUK members tackling misinformation and fraud?

The rise of synthetic media and deepfakes presents new opportunities but has also ushered in new challenges in combating misinformation and fraud.  

As pioneers in technology, techUK members are leading the charge, both in developing synthetic media tools that can support businesses and public services and in tackling the malicious use of AI-generated content, from detecting manipulated videos to creating innovative solutions. 

Synthetic Media and Deepfakes: what are they?  

Synthetic media is an all-encompassing term for any type of content, whether video, image, text or voice, that has been partially or fully generated using artificial intelligence or machine learning. Types of synthetic media include AI-written music, text generation such as OpenAI's ChatGPT, computer-generated imagery (CGI), virtual reality (VR), augmented reality (AR) and voice synthesis. 

Deepfakes are a specific subset of synthetic media that manipulate or alter visual or auditory information to create convincing fake content, whether images, audio or video. A common example of deepfake use is a video that replaces one person's face with another's. 

Deepfakes are often created with malicious intent to deceive viewers, and they are becoming highly realistic and difficult to distinguish from genuine recordings or images.  

In practice, deepfakes take the following most common forms: 

  • face re-enactment, where advanced software is used to manipulate the features of a real person's face; 

  • face generation, where advanced software is used to create entirely new images of faces using data from many real faces, but which do not reflect a real person; and 

  • speech synthesis, where advanced software is used to create a model of someone’s voice.

The key distinction between synthetic media and deepfakes is that the latter typically involves creating content that appears to be real but is actually fabricated, whereas synthetic media broadly involves generating content for creative or practical purposes without aiming to deceive. 

Benefits of Synthetic Media

Synthetic media has considerable positive implications and opportunities. It can be used for creative expression, advertising, entertainment, education, and more. 

Entertainment: Synthetic media has become commonplace in the entertainment industry, with CGI used heavily in films. Platforms like Midjourney have also opened public access to AI that generates realistic image assets for a range of creative and artistic purposes, from graphic design to storytelling. Adobe's Generative Fill feature in Photoshop automatically generates content based on the existing content in an image, filling in missing parts in a way that is consistent with the surrounding elements.   

Advertising: Synthetic media is also being used increasingly in the advertising and marketing industries, from product visualisation to personalised ads. Companies such as techUK member Amazon have added augmented reality to their mobile app, allowing users to place virtual furniture and homeware in their real environment and visualise how products would look in their homes before purchasing.

Healthcare: Synthetic media is being used in medical training simulations, allowing medical professionals to practice procedures on virtual patients. 3D medical imaging solutions have also helped surgeons plan and rehearse complex surgeries. 

Education: Language apps now use AI-driven synthetic media to generate interactive language lessons and exercises for learners.

The risk posed by Deepfakes:  

However, the rapid advancement of generative AI has also allowed hyper-realistic videos, voices, and images, commonly known as deepfakes, to proliferate. There are two broad categories of deepfake risk: the first concerns being deepfaked (i.e. having your image used in a deepfake video), while the second concerns being misled into believing that a deepfake is genuine. 

These digital manipulations can be used for a variety of malicious purposes, including spreading disinformation, impersonating individuals, and perpetrating fraud. As deepfakes become increasingly convincing and prevalent, the need to counter their negative impact is critical.  

The number of deepfake videos published online has risen exponentially: global verification platform Sumsub reports that the number of deepfakes detected in Q1 2023 was 10% higher than in the whole of 2022, with the majority originating from the UK. Meanwhile, 51.1% of online misinformation comes from manipulated images.   

Deepfakes have already been used to create realistic "revenge pornography" involving celebrities and members of the public, but they are increasingly being used to discredit politicians and business leaders or to defraud companies. The wider threats deepfakes pose range from identity theft and privacy concerns to business challenges, reputational damage and eroded trust in media. 

The risk Deepfakes pose to public life:  

With major elections likely to take place in the UK, USA and EU in 2024, deepfakes are likely to spread. As malicious actors seek to exploit their capabilities, the potential grows for swaying public opinion, eroding trust in democratic institutions and spreading misinformation about politicians.  

techUK member Logically.ai recently published a report on the role that generative AI tools such as ChatGPT and Midjourney could play in generating political deepfakes. The company also offers tools dedicated to fact-checking information and combating fake news online, especially ahead of election cycles.  

The risk Deepfakes pose to businesses: 

Companies, like individuals, are at risk of reputational damage at the hands of deepfakes. But possibly the most worrying risk from deepfakes is their potential to assist criminals in committing fraud.  

The ability to look and sound like anyone, including those authorised to approve payments from the company, gives fraudsters an opportunity to exploit weak internal procedures and extract potentially vast sums of money. The schemes would be more sophisticated versions of phishing and business email compromise scams, though harder to detect. Additional risks include the erosion of a brand’s trust and reputation, potential market manipulation, legal and compliance concerns as well as threatening the ability to vet third parties.  

How to address and reduce the risk of Deepfakes: 

To address these issues, companies will need to invest in robust AI detection tools, employee training, authentication mechanisms, and collaborative efforts with industry peers. By taking proactive measures to detect, prevent, and respond to deepfake threats, businesses can maintain their reputation, protect their stakeholders, and contribute to a safer digital ecosystem. 

Though legislation has begun to address this issue, with the Online Safety Bill criminalising revenge porn, the technology is advancing at an exponential rate, meaning companies are having to develop strategies to defend against the growing risk of deepfakes in real time.  

techUK members, from start-ups to established industry leaders, are investing in research and development to detect and prevent deepfake proliferation. Recognising the complexity of the challenge, they are actively collaborating to share knowledge and insights. 

Detection and Mitigation Strategies: Members are developing advanced detection tools and technologies to identify deepfakes and synthetic media across platforms, bolstering digital authenticity and credibility. Some techUK members have already been collaborating with established media outlets to consider watermarking authentic broadcast media content online.  

Some examples of detection technologies that have been developed in recent years and are being used by members include the following (a simplified sketch of the last approach appears after the list):  

  1. Biological signals: This approach tries to detect deepfakes based on imperfections in the natural changes in skin colour that arise from the flow of blood through the face. 

  2. Phoneme-viseme mismatches: For some words, the dynamics of the mouth (the viseme) are inconsistent with the spoken sound (the phoneme). Deepfake models may not combine visemes and phonemes correctly in these cases. 

  3. Facial movements: This approach uses correlations between facial movements and head movements to extract a characteristic movement signature for an individual, distinguishing real content from manipulated or impersonated content. 

  4. Recurrent convolutional models: Videos consist of frames, which are really just a series of images. This approach looks for inconsistencies between these frames using deep learning models.
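
None of these member implementations are public, so purely as a hedged illustration of the fourth approach, the sketch below runs a small convolutional encoder over each video frame and an LSTM over the resulting feature sequence to flag temporal inconsistencies. The architecture, layer sizes and class name are illustrative assumptions, not any member's actual detector.

```python
import torch
import torch.nn as nn

class RecurrentConvolutionalDetector(nn.Module):
    """Hypothetical sketch: per-frame CNN features are fed to an LSTM,
    which looks for frame-to-frame inconsistencies across a video clip."""
    def __init__(self, feature_dim=128, hidden_dim=64):
        super().__init__()
        # Small convolutional encoder applied to every frame independently
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim),
        )
        # LSTM aggregates the frame features in temporal order
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        # Single logit: how likely the clip is manipulated
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, clip):
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.encoder(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (hidden, _) = self.lstm(feats)
        return self.classifier(hidden[-1])  # raw logit per clip

# Example: score two random 16-frame, 64x64 RGB clips
model = RecurrentConvolutionalDetector()
clips = torch.randn(2, 16, 3, 64, 64)
print(torch.sigmoid(model(clips)))  # values near 1 would suggest "fake"
```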

Public Awareness and Education: A range of members have endorsed or launched initiatives to educate individuals and businesses on the existence of deepfakes and synthetic media, their implications, and ways to identify and respond to them. 

Cross-Industry Collaboration: Members are fostering partnerships with government, law enforcement, financial services and other tech companies to address the issue collaboratively. 

Tech Innovation: techUK members are driving thought leadership and open-source initiatives that contribute to understanding deepfake challenges, technological advancements and potential solutions.  


The Future of Synthetic Media 

AI technology is advancing at an exponential pace, with the UK’s Deputy Prime Minister Oliver Dowden recently highlighting AI as the most ‘extensive’ industrial revolution yet. 

We must then ensure that any legal or policy responses strike a balance between harnessing AI's immense potential for societal progress and ensuring that safeguards are in place to counter its misuse.  

As we move forward, it becomes crucial for governments, industries, and society to collaboratively shape a comprehensive legal and policy framework. By actively engaging with techUK member companies at the forefront of deepfake defence, we can promote responsible AI advancement while effectively countering the threat of manipulative content, safeguarding the integrity of elections and fostering a secure digital landscape.

techUK is conducting a new workstream on the use of deepfakes and synthetic media and will work closely with members to address this issue through the Online Fraud Working Group.

If you’d like to find out more about our work on fraud and deepfakes and how to get involved, please contact: [email protected]

 

How techUK members are tackling Deepfakes: 

 

X

X is already taking steps to address deepfakes in its platform rules and policies, going as far as to label posts containing synthetic media: "You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm ("misleading media"). In addition, we may label posts containing misleading media to help people understand their authenticity and to provide additional context." The policy goes on to define 'misleading media' further, considering whether the content was shared in a deceptive manner or false context, as well as the likelihood of the content causing widespread confusion on public issues, threatening public safety or causing serious harm.

In May of this year, X announced a new feature to tackle deepfakes by expanding its Community Notes tool. Until now, Community Notes had allowed members of the public to add context to misleading posts, which proved useful to regular users. The feature has been extended to images: when users believe an image is potentially misleading, they can click on the 'about image' tab and write additional information.

 

Google 

Google has also spearheaded efforts to safeguard the authenticity of audio content. Malicious actors may synthesise speech to try to fool voice authentication systems, or they may create forged audio recordings to defame public figures. Perhaps equally concerning, public awareness of deepfakes (audio or video clips generated by deep learning models) can be exploited to manipulate trust in media.  

Google has made progress on tools to detect synthetic audio: in its AudioLM work, it trained a classifier that can detect synthetic audio produced by its own AudioLM model with nearly 99% accuracy. The company has made advances in detecting fake audio recordings, employing AI algorithms to distinguish between genuine and manipulated voices and to ensure that voice-based interactions remain trustworthy and secure. 
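
The AudioLM classifier itself is not published here, so purely as an assumed illustration of the general approach, a synthetic-audio detector might convert a waveform to a mel-spectrogram and score it with a small convolutional network. The sample rate, layer sizes and single-logit head below are assumptions for the sketch, not Google's implementation.

```python
import torch
import torch.nn as nn
import torchaudio

# Hypothetical sketch of a generic synthetic-audio detector: a mel-spectrogram
# front end feeding a small CNN. This is NOT Google's AudioLM classifier.
melspec = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # single logit: synthetic vs. genuine
)

waveform = torch.randn(1, 16000)           # one second of stand-in 16 kHz audio
features = melspec(waveform).unsqueeze(0)  # (batch=1, channel=1, mels, time)
print(torch.sigmoid(classifier(features)))
```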

In 2018, Google launched the Google News Initiative and, as part of it, released a large dataset of synthetic speech to help advance fake audio detection. Further datasets were made available to participants in the independent 2019 ASVspoof challenge, which invited researchers worldwide to submit countermeasures against fake speech, with the goal of making automatic speaker verification (ASV) systems more secure. Google also released a dataset of visual deepfakes that has been incorporated into the new FaceForensics benchmark from the Technical University of Munich and the University Federico II of Naples. 

The company is also taking important steps in watermarking and metadata. Watermarking embeds information directly into content in ways that are maintained even through modest image editing, and Google is building models to include watermarking and other techniques from the start. Metadata allows content creators to associate additional context with original files, giving you more information whenever you encounter an image; Google ensures every one of its AI-generated images carries that metadata.   
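
Google's metadata scheme is not specified here, but as a minimal sketch of the general idea, an application could check an image's embedded key/value metadata for provenance fields. The field names below ('ai_generated', 'credit') are hypothetical, not a published schema.

```python
from PIL import Image

# Hypothetical sketch: inspect an image file for provenance metadata.
def describe_provenance(path):
    image = Image.open(path)
    # PNG text chunks and similar key/value metadata land in Image.info
    metadata = dict(image.info)
    if metadata.get("ai_generated") == "true":
        return f"Flagged as AI-generated (source: {metadata.get('credit', 'unknown')})"
    return "No provenance metadata found"

print(describe_provenance("example.png"))  # "example.png" is a placeholder
```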

Intel 

Intel is leveraging its AI expertise to tackle deepfakes by developing algorithms that detect and mitigate manipulated content. Through machine learning and advanced analytics, Intel is working to provide tools that verify media authenticity using biometrics. Last year, the company launched the world's first real-time deepfake detector. The detection platform utilises the FakeCatcher algorithm, which analyses 'blood flow' in video pixels to return results in milliseconds with 96% accuracy. 

Most deep learning-based detectors look at raw data to try to find signs of inauthenticity and identify what is wrong with a video. In contrast, FakeCatcher looks for authentic clues in real videos by assessing what makes us human: subtle 'blood flow' in the pixels of a video. These blood flow signals are collected from all over the face, and algorithms translate them into spatiotemporal maps. Then, using deep learning, the system can instantly detect whether a video is real or fake.
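
Intel has not released FakeCatcher's code, so the sketch below only illustrates the underlying remote-photoplethysmography (rPPG) idea: average a colour channel over a facial region in each frame, then band-pass the resulting signal to plausible heart-rate frequencies. The fixed face region, the 0.7-4 Hz band and the function names are assumptions for illustration.

```python
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def green_channel_signal(video_path, roi=(100, 100, 200, 200)):
    """Average the green channel over a fixed face region in each frame.
    A real system would track the face instead of using a fixed box."""
    capture = cv2.VideoCapture(video_path)
    x, y, w, h = roi
    samples = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        face = frame[y:y + h, x:x + w]
        samples.append(face[:, :, 1].mean())  # OpenCV is BGR; index 1 = green
    capture.release()
    return np.array(samples)

def pulse_band(signal, fps=30.0):
    """Band-pass to plausible heart-rate frequencies (0.7-4 Hz). A genuine
    face shows a periodic pulse component; deepfakes often lose or distort it."""
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    return filtfilt(b, a, signal)

signal = green_channel_signal("suspect_clip.mp4")  # placeholder path
print(pulse_band(signal)[:10])
```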

 

Javahir Askari

Policy Manager, Digital Regulation, techUK

Javahir joined techUK in June 2022, as Policy Manager for Digital Regulation.

Prior to joining techUK, she worked at the New Statesman, delivering cross sector policy research and events for their policy supplement. Javahir previously worked as part of the educational programme at the European Commission based in London, engaging with the UK's youth on the Brexit process. 

Javahir holds a BA Politics and International Relations (Hons) from the University of Nottingham, an MA Human Rights from University College London, and a Postgraduate Diploma in Law from BPP University. 

Email:
[email protected]
LinkedIn:
linkedin.com/in/javahiraskari


Margherita Certo

Head of Press and Media, techUK

Margherita is the Head of Press and Media at techUK, working across all communications and marketing activities and acting as the point of contact for media enquiries.

Margherita works closely with the staff at techUK to communicate the issues that matter most to our members with the media.

Prior to joining techUK, Margherita worked in public relations across technology, public affairs, and charity, designing evidence-based strategic campaigns and building meaningful ties with key stakeholders.

Email:
[email protected]
Phone:
07462 107214


 
