11 Nov 2020

Deepfakes: Curbing the spread of misinformation

Guest Blog: Nicola Milburn, Senior Consultant, Roke

Improved technology and the 24-hour nature of our online world mean it’s becoming increasingly difficult to judge whether the media we consume is genuine. Deepfakes, synthetic videos and audio recordings generated using artificial intelligence, are becoming common and have the potential to be malicious.

The main method for creating them involves the training of a generative adversarial network (GAN). This is a type of machine learning (ML) framework where two neural networks compete against each other. One network (a generator) creates deepfaked video candidates and the other network (a discriminator) tries to classify the candidates as either real or fake. By competing against each other, the generator gets better at creating fakes and the discriminator gets better at detecting them.
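To make that training loop concrete, here is a minimal sketch of adversarial training in PyTorch. The tiny fully connected networks, vector "frames" and training settings are illustrative assumptions only; real deepfake systems use large convolutional networks trained on actual video data.

```python
import torch
import torch.nn as nn

# Illustrative sizes; real deepfake GANs operate on video frames,
# not tiny vectors like these.
LATENT_DIM, DATA_DIM = 16, 64

# Generator: maps random noise to a fake "frame" (here just a vector).
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how likely an input is to be real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, DATA_DIM)   # stand-in for a batch of real frames
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator step: learn to label real as 1 and fake as 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Each side's loss is the other side's training signal, which is exactly why the two networks improve together.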

This technology is making deepfakes more sophisticated and harder to distinguish from real videos. In parallel, commercial tools are making it easier for anyone to create them.


How deepfakes can be dangerous

Researchers have found that falsified news is likely to spread faster online than accurate information. They speculate this is because people are biased towards sharing negative news over positive news. Falsified news and deepfakes are more likely to fit this pattern and are therefore more likely to be shared.

The potential for deepfakes to spread misinformation across a multitude of areas is vast. Examples range from impersonating someone’s voice to commit fraud, fooling a colleague, family member or friend into transferring money, to damaging the reputation of a business with a fabricated video of its CEO announcing a major financial loss or the termination of a partnership with another company.


So what can we do about it?

One option is to research ways of detecting deepfakes using ML algorithms. Because many deepfakes are created by improving the generation algorithm in step with a detection algorithm, improvements in detection are quickly followed by improvements in generation, creating a ‘cat and mouse’ chase in which the fakes constantly get better. But if researchers can keep finding ‘tells’ in online media, creators will be forced to continually develop new algorithms, raising the cost of producing a convincing fake.
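As an illustration of the detection side of that chase, the sketch below (again in PyTorch) trains a small binary classifier to score individual frames as real or fake. The random tensors standing in for labelled frames, the network shape and the inference scheme are all assumptions; production detectors use far deeper models, large labelled datasets and temporal cues across frames.

```python
import torch
import torch.nn as nn

# Minimal frame-level detector: a small CNN that outputs a "fake" score.
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
optimiser = torch.optim.Adam(detector.parameters(), lr=1e-4)

# Placeholder batch: random tensors stand in for labelled video frames.
frames = torch.randn(8, 3, 224, 224)          # 8 RGB frames
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real

# One supervised training step.
logits = detector(frames)
loss = loss_fn(logits, labels)
optimiser.zero_grad()
loss.backward()
optimiser.step()

# At inference, average per-frame scores to flag a suspicious video.
video_score = torch.sigmoid(detector(frames)).mean().item()
```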

Another option involves adding a digital signature to prove the authenticity of a video, flagging up when it has been tampered with. Traditionally, incorporating these signatures into live video has been challenging due to variable streaming bit rates. However, Roke innovation teams have recently developed a feature-based authentication measure that can be attached to video streams as a signature. It is derived from visible elements of the footage using ML techniques. Viewers or third parties can then authenticate a video by running the same algorithm and comparing signatures.
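Roke’s feature-based measure itself isn’t published here, so the following is only a generic sketch of the idea, substituting a simple perceptual hash for the ML-derived features: compute a compact signature from the visible content of each frame, publish it alongside the stream, and let a viewer recompute and compare it later. The 8x8 hash, the 2% tolerance and the NumPy frame format are all illustrative assumptions.

```python
import numpy as np

def frame_signature(frame: np.ndarray, size: int = 8) -> np.ndarray:
    """64-bit perceptual hash of one RGB frame (H x W x 3 array)."""
    gray = frame.mean(axis=2)                      # collapse RGB to grayscale
    h, w = gray.shape
    gray = gray[: h - h % size, : w - w % size]    # crop to a multiple of size
    bh, bw = gray.shape[0] // size, gray.shape[1] // size
    # Average into a size x size grid, then threshold at the mean.
    blocks = gray.reshape(size, bh, size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def video_signature(frames) -> np.ndarray:
    """Concatenate per-frame hashes into one stream signature."""
    return np.concatenate([frame_signature(f) for f in frames])

def authenticate(received_frames, published_sig, tolerance=0.02) -> bool:
    """Recompute the signature and allow a small Hamming distance,
    so minor processing passes but visible tampering does not."""
    recomputed = video_signature(received_frames)
    return np.mean(recomputed != published_sig) <= tolerance

# Example: sign ten frames, then tamper with one and re-check.
frames = [np.random.rand(240, 320, 3) for _ in range(10)]
signature = video_signature(frames)
frames[5] = np.random.rand(240, 320, 3)            # simulate tampering
print(authenticate(frames, signature))             # False: signature mismatch
```

In a real deployment the published signature would itself be cryptographically signed by the broadcaster, so that an attacker cannot simply recompute it over tampered footage.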


Moving forward

Uses for deepfakes aren’t always negative, however. Numerous sources have reported that the technology is being adapted innovatively to provide a host of positive solutions in medicine, entertainment and education. It will be interesting to see how it can also help to establish provenance and verify what is real rather than fake.

Tackling the spread of misinformation through text, images or videos requires a group effort, with collaboration from tech companies, lawyers, law enforcement and the general public. At Roke, we believe in improving the world by combining the physical and digital in new ways. That’s why we’ve fostered an environment, through practices like our Innovation Hub, where people can access and use our extensive science and engineering expertise to solve difficult problems, delivering practical results that protect and safeguard in an uncertain world.