A year ago, Jordan Peele released a video warning us of the coming surge in deepfake technology. In it, Peele used machine-learning tools to ventriloquize Barack Obama, making him appear to say outlandish things. Amusing as such videos may seem, their implications for our future are far from amusing. They mean that the average schoolkid will be able to carry out a similar attack in the near future: all a would-be swindler needs is a credulous target and enough source material to pull off the impersonation trick.
Microsoft and Facebook have announced a prize pool of $10 million for those who come up with the best deepfake detection algorithms, but such projects are no longer really preventive. Deepfake crimes have already been perpetrated: in March 2019, the chief executive of a UK energy company was defrauded of €220,000. This is not a game of impersonation anymore; it is a serious issue.
But before we delve deeper into the issue, let’s start with a definition.
What Are Deepfakes?
The term deepfake is a blend of deep learning and fake. Research in the area dates back to 1997, but the term went mainstream in 2017. A deepfake is AI-generated video or audio that looks and sounds real: it takes a person in an existing image, audio recording, or video and replaces them with someone else’s likeness. And such fakes multiply quickly.
The widespread concern is that these fakes can be used to sway opinion during an election or implicate a person in a crime. And since they have already been used to defraud a British energy firm, their impact is real. Many firms are developing new ways to spot misleading AI-generated media, but detection tools are only a viable short-term solution.
The deepfake arms race is just beginning. And while the risks are clear, how can it affect businesses worldwide?
Deepfakes as Business Threats
Until now, attention to deepfake technology has centered on its potential for misinformation campaigns and mass manipulation through social media, especially in politics. Yet 2020 might be the year we start seeing deepfakes become a real threat to the enterprise, and cybersecurity defense teams might not be properly equipped to handle it. Spearphishing already targets high-level employees, tricking them into completing a manual task via emails that contain no suspicious links or attachments; deepfakes have the ability to supercharge these attacks.
Deepfakes in Action
Imagine this: you receive an email from your company’s CEO, asking you to carry out some financial transaction. Then you receive a voicemail addressing you by name, referencing previous conversations you’ve had with them, and all in the CEO’s voice. At this point the attack breaks the truth barrier: it makes more sense to accept the request as real and authentic than to consider the possibility that it’s fake.
As deepfake technology evolves even further, you will take for granted the reality of a video call with your alleged CEO, even though it might be a deepfake video generated in real time. The voice version has already happened: a CEO was deceived by an AI-generated voice into transferring $243,000 (the €220,000 fraud mentioned earlier) to a bank account he believed belonged to a company supplier.
Currently, the only remedy is to educate users about these new types of attack and to stay alert for any behavior that seems out of the ordinary, however small. For the moment, we are in luck: at its current stage, deepfake video is relatively easy to spot. The tell-tale signs of a deepfake video include:
- Slightly unnatural mouth movements
- Confusing shadows
- Lack of blinking
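These cues can be combined into a crude scoring heuristic. The sketch below is purely illustrative: it assumes some upstream face-tracking pipeline has already extracted per-clip features (blink counts, mouth jitter, shadow inconsistency), and every feature name and threshold is a hypothetical stand-in, not a real detection API.

```python
# Minimal sketch of a heuristic "suspicion score" built from the
# tell-tale signs above. All feature values and thresholds are
# hypothetical; a real detector would learn them from data.

def blink_rate(blink_count: int, total_frames: int, fps: float = 30.0) -> float:
    """Blinks per minute over the clip."""
    minutes = total_frames / fps / 60.0
    return blink_count / minutes if minutes > 0 else 0.0

def suspicion_score(blink_count: int, total_frames: int,
                    mouth_jitter: float, shadow_inconsistency: float) -> float:
    """Combine the three cues into a rough 0..3 score.

    Humans typically blink ~15-20 times per minute, so a very low rate
    is suspicious. mouth_jitter and shadow_inconsistency are assumed
    to be normalized to the 0..1 range by the upstream pipeline.
    """
    score = 0.0
    if blink_rate(blink_count, total_frames) < 5.0:  # lack of blinking
        score += 1.0
    score += mouth_jitter           # slightly unnatural mouth movements
    score += shadow_inconsistency   # confusing shadows
    return score

# A 60-second clip at 30 fps with only 2 blinks, a jittery mouth,
# and inconsistent shadows scores high enough to warrant a closer look.
print(suspicion_score(2, 1800, 0.8, 0.6))
```

The point is not the specific numbers but the shape of the approach: cheap, interpretable cues can triage clips for human review while the arms race continues.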
In the future, deepfakes will look more and more real. It is up to developers to create forensic identification systems: the equivalent of detecting a photoshopped image just by looking at its pixels.
Keeping Safe from This Threat
Combating this new era of fake news will take a combination of technological and human defenses. No purely technological solution is going to be very effective at this point, but other options exist: you can mitigate the threat with effective communications. Monitor information related to your company and be ready to control the narrative should you face a disinformation outbreak.
Here are some suggestions to prepare your company for the deepfake threat (conveniently, the same methods apply to other types of PR mishap):
- Minimize channels for company communications
- Drive consistent information distribution
- Develop a disinformation response plan
- Organize a centralized monitoring and reporting system
- Encourage responsible legislation
- Monitor the development of detection and prevention countermeasures
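As one concrete illustration of the centralized-monitoring point above, a simple first step is flagging sudden spikes in company mentions so the response plan can kick in early. The sketch below is a hypothetical, simplified example: it assumes mention counts are already being collected per hour from your channels, and the function name and threshold are illustrative, not a real monitoring product.

```python
from statistics import mean, stdev

def spike_alert(hourly_mentions: list, threshold_sigma: float = 3.0) -> bool:
    """Flag the latest hour if mentions far exceed the historical baseline.

    hourly_mentions: counts per hour, oldest first; the last entry is
    the hour under test. A spike beyond mean + threshold_sigma * stddev
    (with a floor on the spread to avoid dividing by a flat baseline)
    triggers an alert for the communications team.
    """
    history, latest = hourly_mentions[:-1], hourly_mentions[-1]
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    baseline, spread = mean(history), stdev(history)
    return latest > baseline + threshold_sigma * max(spread, 1.0)

# A quiet baseline followed by a sudden burst of mentions is flagged,
# prompting a human to check whether a deepfake is circulating.
print(spike_alert([10, 12, 9, 11, 10, 80]))   # spike
print(spike_alert([10, 12, 9, 11, 10, 13]))   # normal variation
```

Anomaly detection like this does not identify deepfakes itself; it just shortens the time between a disinformation outbreak and your response to it.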
Blind trust is no longer a luxury we can afford. Maintain a healthy dose of skepticism and stay aware.