What is a deepfake and should I be concerned about it?

A deepfake is a technique for human image synthesis based on artificial intelligence. The technology is used to produce or alter video content so that it presents something that didn’t, in fact, occur. The term was coined in 2017 by a Reddit user who used deep learning to superimpose celebrities’ faces onto video footage of adult film performers – a practice that has since been banned across most major platforms.

The AI learns what a person’s face looks like and transposes it onto someone else’s expressions.
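To make that idea concrete, here is a minimal toy sketch of the classic deepfake architecture: a single shared encoder learns a common face representation, while a separate decoder is trained per person. Swapping happens by encoding person A’s frame and decoding it with person B’s decoder, so B’s appearance takes on A’s expression. All layer sizes and the image resolution below are illustrative assumptions, not what real deepfake tools use, and no actual training is shown.

```python
# Toy sketch of the shared-encoder / per-person-decoder deepfake idea.
# Dimensions are illustrative only; real systems use convolutional
# networks on much larger face crops and train for many hours.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop (toy size)

class Encoder(nn.Module):
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG, 1024), nn.ReLU(),
            nn.Linear(1024, latent),
        )

    def forward(self, x):
        return self.net(x)  # shared "face" representation

class Decoder(nn.Module):
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent, 1024), nn.ReLU(),
            nn.Linear(1024, IMG), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on person A's faces
decoder_b = Decoder()  # would be trained only on person B's faces

# Training would minimise reconstruction loss for each (encoder, decoder)
# pair. At swap time, a frame of person A goes through person B's decoder:
frame_of_a = torch.rand(1, IMG)           # stand-in for a real face crop
swapped = decoder_b(encoder(frame_of_a))  # B's appearance, A's expression
```

Because the encoder is shared between both people, it is forced to capture pose and expression in a way that either decoder can render, which is what makes the swap work.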

Because deepfakes are generated by AI, producing a realistic video no longer requires much skill or time. Anyone can download open-source deepfake software and, with a reasonably powerful computer, start creating convincing fake videos as they read this article.

Unfortunately, the downside to this accessibility is that anyone can create a deepfake video, which could have dire consequences. A fabricated video of a world leader issuing an emergency alert that an attack is imminent, for example, could spark international outrage. And if people begin taking deepfake videos at face value, there is an equal risk that they will stop trusting any video content altogether.

The psychology behind why this technology has the potential to be so powerful stems from the fact that human beings seek out information that supports what they want to believe and ignore the rest. Exploiting that tendency gives malicious actors a lot of power. We already see this with misinformation (so-called “fake news”), where deliberate falsehoods spread under the guise of truth; by the time the information has been properly fact-checked, many people already believe it. To avoid spreading misinformation, it is therefore imperative to rely on respected outlets, such as the BBC, Le Monde, or the Financial Times, to make sure you are reading and sharing legitimate news stories.

How to detect manipulated videos

However, while machine learning’s ability to produce deepfakes is evolving, so too is the software that detects them. Many consumer deepfake apps also block users from uploading politicians’ faces, to limit the risk of defamation ahead of an election. With famous people it is often easy to find the original piece of footage, and it is harder to produce a convincing deepfake from poor-quality source material, meaning older videos are less likely to be affected. In addition, social media platforms are spending millions of pounds on research to help detect manipulated content, with Facebook, for example, committing £5.8 million in September 2019.

Ultimately, spreading awareness that videos can be faked in this way is the key to avoiding being caught out, or outraged, by a video of someone famous. The best advice with political statements is always to wait and see whether the news reports that a video has been deepfaked. And if you choose to make your own, read the app’s terms and conditions carefully to understand what it does with your data and whether you can request to have your photo deleted from any database where it might be stored.

What do you think about deepfake technology? Let us know in the comments below. You can also check out our other similar blogs here: