Deepfakes: What are They and Why are They a Threat?

  • By Ed Bishop
  • 21 August 2020

According to a recent Tessian survey, 74% of IT leaders think deepfakes are a threat to their organizations’ and their employees’ security. Are they right to be worried? We take a look.

What is a deepfake?

Deepfakes are AI-generated fake videos or audio recordings that look and sound like the real thing. They use deep learning, a powerful technique from machine learning (ML) and artificial intelligence (AI), to manipulate or create visual and audio content in order to deceive people.

How could deepfakes compromise security?

“Hacking humans” is a tried and tested method of attack used by cybercriminals to breach companies’ security, access valuable information and systems, and steal huge sums of money. 

In the world of cybersecurity, attempts to “hack humans” are known as social engineering attacks. In layman’s terms, social engineering is simply an attempt to trick people. These tactics and techniques have been around for years and they are constantly evolving. 

For example, cybercriminals have realized that the “spray-and-pray” phishing campaigns they previously relied on were losing their efficacy. Why? Because companies have strengthened their defenses against these bulk attacks, and people have begun to recognize the cues that signal a scam, such as poor grammar or typos.

As a result, hackers have moved to crafting more sophisticated and targeted spear phishing attacks, impersonating senior executives, third-party suppliers, or other trusted authorities in emails to deceive employees.

Some even play the long game, building rapport with their targets over time before asking them to wire money or share credentials. Attackers will also directly spoof the sender’s domain and add company logos to their messages to make them look more legitimate.

It’s working. 

Last year alone, scammers made nearly $1.8 billion through Business Email Compromise (BEC) attacks. While spear phishing attacks take more time and effort to create, they are more effective, and the ROI for an attacker is much higher. So, what does this have to do with deepfakes?

Deepfakes – either as videos or audio recordings – are the next iteration of advanced impersonation techniques malicious actors can use to abuse trust and manipulate people into complying with their requests. 

These attacks have proven even more effective than targeted email attacks. As the saying goes, seeing – or hearing – is believing. If an employee believes that the person on the video call in front of them is the real deal – or that the person calling them is their CEO – then it’s unlikely they would ignore the request. Why would they question it?

Examples of deepfakes

In 2019, cybercriminals mimicked the voice of a CEO at a large energy firm, demanding a fraudulent transfer of €220,000. And just last month, Twitter experienced a major security breach after employees were targeted by a “phone spear phishing” or “vishing” attack. Targeted employees received phone calls from hackers posing as IT staff, tricking them into sharing passwords for internal tools and systems.

While it’s still early days and, in some cases, the deepfakes aren’t that convincing, there’s no denying that the technology will continue to get better, faster, and cheaper in the near future.

You just have to look at advanced algorithms like GPT-3 to see how quickly this could become a reality.

Earlier this year, OpenAI released GPT-3—an advanced natural language processing (NLP) algorithm that uses deep learning to produce human-like text. It’s so convincing, in fact, that a student used the tool to produce a fake blog post that landed in the top spot on Hacker News—proving that AI-written content can pass as human-authored.

It’s easy to see why the security community is worried about the potential impact of deepfakes. Gone are the days of hackers drafting poorly written emails, full of typos and grammatical errors. Using AI, they can craft highly convincing messages that actually look like they’ve been written by the people they’re impersonating.

This is something we will explore further at the Tessian HLS Summit on September 9th. Register here.

Who is most likely to be targeted by deepfake scams?

The truth is, anyone could be a target. There is no one group of people more likely than another to be targeted by deepfakes. 

Within your organization, though, it is important to identify who might be most vulnerable to these types of advanced impersonation scams and make them aware of how – and on what channels – they could be targeted. 

For example, a less senior employee may have no idea what their CEO sounds like or even looks like. That makes them a prime target. 

It’s a similar story for new joiners. Hackers will do their homework, trawl through LinkedIn, and prey on new members of staff, knowing it’s unlikely they have met senior members of the organization and so wouldn’t recognize their voices on a call.

Attackers may also pretend to be someone from the IT team who’s carrying out a routine set-up exercise. This would be an opportune time to ask their targets to share account credentials. 

As new joiners have no reference points to verify whether the person calling them is real or fake – or whether the request they’re being asked to carry out is even legitimate – it’s likely that they’ll fall for the scam.

How easy are deepfakes to make?

Researchers have shown that you only need about one minute of audio to create an audio deepfake, while “talking head” style fake videos require around 40 minutes of input data.

If your CEO has spoken at an industry conference and there’s a recording of it online, hackers have the input data they need to train their algorithms and create a convincing deepfake.

But crafting a deepfake can take hours or days, depending on the hacker’s skill level. For reference, Timothy Lee, a senior tech reporter at Ars Technica, was able to create his own deepfake in two weeks, and he spent just $552 doing it.

Deepfakes, then, are a relatively simple but effective way to hack an organization. Or even an election.

“76% of U.S. IT leaders believe deepfakes will be used as part of disinformation campaigns in the election.”

How could deepfakes compromise election security?

There’s been a lot of talk about how deepfakes could be used to compromise the security of the 2020 U.S. presidential election. In fact, an overwhelming 76% of IT leaders believe deepfakes will be used as part of disinformation campaigns in the election. 

Fake messages about polling site disruptions, opening hours, and voting methods could affect turnout or prevent groups of people from voting. Worse still, disinformation and deepfake campaigns – whereby criminals swap out the messages delivered by trusted voices like government officials or journalists – threaten to cause even more chaos and confusion among voters.

Elvis Chan, a Supervisory Special Agent assigned to the FBI who will be speaking at the Tessian HLS Summit in September, believes that people are right to be concerned. 

“Deepfakes may be able to elicit a range of responses which can compromise election security,” he said. “On one end of the spectrum, deepfakes may erode the American public’s confidence in election integrity. On the other end of the spectrum, deepfakes may promote violence or suppress turnout at polling locations.”

So, how can you spot a deepfake and how can you protect your people from them? 

How to protect yourself and your organization from deepfakes

Poorly made video deepfakes are easy to spot – the lips are out of sync, the speaker isn’t blinking, or there’s a flicker on the screen. But as the technology improves and NLP algorithms become more advanced, it’s going to be more difficult for people to spot deepfakes and other advanced impersonation scams.
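To make the blink check concrete, here’s a minimal sketch of how it might be automated, assuming you already have per-frame eye landmark coordinates from a face-tracking library. The eye-aspect-ratio formula is a published blink-detection heuristic; the function names and threshold values below are illustrative, not any particular product’s implementation.

```python
import numpy as np

def eye_aspect_ratio(eye):
    # `eye` is six (x, y) landmarks around one eye: corner, two upper-lid
    # points, corner, two lower-lid points. The ratio drops sharply when
    # the eye closes, so a blink shows up as a brief dip.
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical lid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    # Count blinks as runs of at least `min_frames` consecutive frames
    # where the eye aspect ratio falls below the threshold.
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# Adults typically blink around 15-20 times a minute, so a long clip
# with a near-zero blink count is one (weak) deepfake signal.
```

A low blink count is only a weak signal on its own – newer deepfake generators have learned to blink – which is why heuristics like this need to be combined with other checks.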

Ironically, AI is one of the most powerful tools we have to combat AI-generated attacks. 

AI can learn what normal communication patterns look like and automatically flag anomalies – like impersonations – faster and more accurately than a human can.
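As a toy illustration of what that kind of anomaly detection involves – not a description of Tessian’s or anyone else’s actual model – the sketch below trains scikit-learn’s IsolationForest on invented “normal request” metadata and flags a request that breaks the pattern. Every feature and number here is made up for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-request features:
# [hour of day, new channel? (0/1), amount requested ($k), urgency cues]
normal_requests = np.array([
    [10, 0,  5, 0],
    [11, 0, 12, 0],
    [14, 0,  8, 1],
    [16, 0, 10, 0],
    [ 9, 0,  7, 0],
] * 20)  # repeated to stand in for months of routine behavior

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_requests)

# A late-night call on a new channel demanding a large, urgent transfer:
suspicious = np.array([[23, 1, 220, 1]])
print(model.predict(suspicious))  # [-1] means "anomaly" - flag for review
```

Production systems learn far richer behavioral signals than this, but the principle is the same: model what normal looks like, then flag whatever doesn’t fit.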

But we can’t just rely on technology. Education and awareness among employees are also incredibly important. It’s therefore encouraging to see that 61% of IT leaders are already educating their employees on the threat of deepfakes and another 27% have plans to do so.

To help you out, we’ve put together some of our top tips which you and your employees can follow if you are being targeted by a deepfake or vishing attack:

  1. Pause and question whether it seems right for a colleague – senior or otherwise – to ask you to carry out the request.
  2. Verify the request with the person directly via another channel of communication, such as email or instant messaging. People will not mind if you ask. 
  3. To verify their identity, ask the person making the request something only the two of you would know. For example, ask them what their partner’s name is or what the office dog is called.
  4. Report incidents to the IT team. With this knowledge, they will be able to put in place measures to prevent similar attacks in the future.

Looking for more advice? At the Tessian HLS Summit on September 9th, the FBI’s Elvis Chan will discuss tactics such as reporting, content verification, and critical thinking training to help employees avoid falling for deepfakes and advanced impersonation scams.

You can register for the event here.

Ed Bishop, co-founder and Chief Technology Officer