Deepfake Scams Surge by 300%: A Growing Threat to Cybersecurity in 2024

March 6, 2025

Rupert Marais is an in-house security specialist with expertise in endpoint and device security, cybersecurity strategy, and network management. This discussion delves into Rupert’s professional background, his thoughts on deepfake technology, and its implications for cybersecurity. We explore the findings from iProov’s recent report, the tools and techniques used to create deepfakes, and the challenges traditional security frameworks face in detecting them. Rupert also offers insights on how organizations can protect themselves, real-world examples of deepfake scams, and the evolving market for deepfake technology. Finally, we discuss user awareness and training around deepfakes.

Can you provide a brief overview of your professional background? How did you become interested in cybersecurity and deepfakes?

I’ve spent over a decade in cybersecurity, focusing on endpoint and device security as well as network management. My interest in the field was sparked by the growing sophistication of cyber threats and the increasing importance of protecting digital assets. My interest in deepfakes followed naturally: they represent a significant and evolving threat that challenges existing security paradigms and has far-reaching implications.

What exactly is a deepfake? How has deepfake technology evolved over the past few years? Why do you think there has been such a significant increase in deepfake attacks recently?

A deepfake is synthetic media in which a person’s likeness is manipulated using artificial intelligence, often to create realistic but fake videos or audio recordings. Over the past few years, deepfake technology has advanced tremendously thanks to improvements in machine learning algorithms and increased computational power. The sharp rise in deepfake attacks can be attributed to the wider availability of sophisticated AI tools, easy access to these technologies through various online platforms, and growing financial incentives for cybercriminals.

Can you explain iProov’s claim of a 300 percent surge in face swap attacks in 2024? What are injection attacks, and why do they pose such a high risk? Can you elaborate on the 2,665 percent spike in the use of virtual camera software for scams?

iProov reported a 300 percent increase in face swap attacks, which use deepfake technology to replace someone’s face in real time during video calls, often to commit fraud. Injection attacks involve feeding fake video or data directly into verification systems, allowing attackers to bypass facial-recognition checks entirely. They are particularly dangerous because they circumvent otherwise robust security measures. The 2,665 percent spike in virtual camera software relates to its use in manipulating video feeds during online meetings, which makes it much harder for security systems to detect fraudulent activity.
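To illustrate why passive checks alone struggle against injection attacks, here is a minimal sketch of an active liveness challenge-response flow of the kind verification systems use: the server issues a random, unpredictable prompt that a pre-recorded or injected feed cannot anticipate. Everything here is illustrative; the challenge list, timing threshold, and the `detect_action` placeholder are assumptions, not any vendor’s actual API.

```python
import secrets
import time

# Hypothetical active-liveness flow: the server issues a random challenge
# that a pre-recorded or injected video feed cannot anticipate.
CHALLENGES = ["blink twice", "turn head left", "smile", "look up"]

def issue_challenge() -> dict:
    """Pick a random challenge and record when it was issued."""
    return {"action": secrets.choice(CHALLENGES), "issued_at": time.monotonic()}

def verify_liveness(challenge: dict, response_frames: list,
                    max_latency_s: float = 5.0) -> bool:
    """Pass only if the requested action appears in the frames within the window."""
    elapsed = time.monotonic() - challenge["issued_at"]
    if elapsed > max_latency_s:
        # A live human responds quickly; splicing a matching fake clip takes time.
        return False
    return detect_action(response_frames, challenge["action"])

def detect_action(frames: list, action: str) -> bool:
    # Placeholder for a real face-analysis model (e.g., blink/landmark detection).
    raise NotImplementedError("plug in a real liveness model here")
```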

What are some of the tools being used to create and deploy deepfakes? How do criminals typically use virtual camera software to carry out these attacks? How hard is it to detect when someone is using such software maliciously?

Tools like generative adversarial networks (GANs) and various AI-based software are commonly used to create deepfakes. Criminals use virtual camera software to inject manipulated video feeds into live calls, making it appear as though they are someone else. Detecting malicious use of such software is challenging due to the high-quality output of these tools and the difficulty in distinguishing between genuine and fake video feeds in real-time.
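One concrete, though easily evaded, detection heuristic is to compare the reported capture-device name against known virtual camera products. The sketch below is illustrative only: the product list is incomplete, and attackers can rename or spoof drivers, so a check like this should be one weak signal among many, never a sole control.

```python
# Minimal heuristic: flag capture devices whose names match known virtual
# camera software. Easily evaded, so use only as one signal among many.
KNOWN_VIRTUAL_CAMERAS = {
    "obs virtual camera",
    "manycam",
    "snap camera",
    "xsplit vcam",
}

def is_suspect_device(device_name: str) -> bool:
    """Return True if the device name matches a known virtual camera product."""
    name = device_name.lower()
    return any(vc in name for vc in KNOWN_VIRTUAL_CAMERAS)

# Usage: in practice the name would come from a platform API
# (e.g., DirectShow on Windows or AVFoundation on macOS).
print(is_suspect_device("OBS Virtual Camera"))  # True
print(is_suspect_device("Integrated Webcam"))   # False
```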

What challenges do traditional security frameworks face in detecting deepfakes? How can organizations protect themselves against deepfake and identity spoofing attacks? What multiple defensive layers should organizations integrate to better defend against these threats?

Traditional security frameworks struggle with the real-time detection of highly realistic deepfakes and the rapid evolution of attack methods. Organizations can protect themselves by implementing multi-factor authentication, continuous user behavior monitoring, and leveraging AI-based detection tools to identify anomalies. Integrating multiple defensive layers, such as biometrics, device-based authentication, and real-time threat intelligence, can create a more robust defense against these attacks.
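As a rough sketch of how such layers might be composed, the pipeline below trusts a session only if every independent check passes. Each field and threshold is a hypothetical stand-in for a real control (an MFA service, a biometric liveness SDK, device attestation, behavioral monitoring), not any specific vendor’s API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Session:
    user_id: str
    mfa_passed: bool
    liveness_score: float   # from a biometric liveness SDK (assumed)
    device_attested: bool   # from a device-integrity check (assumed)
    anomaly_score: float    # from behavioral monitoring (assumed)

# Each layer is an independent check; a deepfake that beats one layer
# (e.g., the face match) should still fail another (e.g., attestation).
LAYERS: list[tuple[str, Callable[[Session], bool]]] = [
    ("mfa",      lambda s: s.mfa_passed),
    ("liveness", lambda s: s.liveness_score >= 0.9),  # threshold is illustrative
    ("device",   lambda s: s.device_attested),
    ("behavior", lambda s: s.anomaly_score < 0.5),    # threshold is illustrative
]

def authorize(session: Session) -> bool:
    """Trust the session only if every defensive layer passes."""
    for name, check in LAYERS:
        if not check(session):
            print(f"denied: layer '{name}' failed")
            return False
    return True
```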

Can you provide some examples of high-profile deepfake scams from the past year? What lessons can be learned from the KnowBe4 incident involving a North Korean cybercriminal?

One example is a Hong Kong-based company that lost $25 million after fraudsters used deepfake video to impersonate senior executives on a conference call. The KnowBe4 incident involved a North Korean cybercriminal who deceived interviewers using AI-enhanced images. The key lesson from these incidents is the importance of robust verification processes and continuous vigilance, even in plausible and seemingly legitimate interactions.

How has the online market for deepfake technology changed in recent years? What does iProov’s identification of 31 new crews selling identity verification spoofing tools in 2024 say about the market?

The online market for deepfake technology has expanded significantly, with more tools readily accessible to a broader audience. iProov’s identification of 31 new crews indicates a growing, active community dedicated to developing and selling spoofing tools. This democratization of deepfake technology lowers the barrier to entry, making it easier for less skilled criminals to launch sophisticated attacks.

How aware do you think the general public is about deepfake technology and its risks? Based on iProov’s study, why do you think people find it challenging to detect deepfakes?

Public awareness of deepfake technology and its risks is still relatively low. iProov’s study showed that even when people are aware they might be looking at deepfakes, a very small percentage can accurately identify them. This difficulty arises from the high realism of modern deepfakes, the subtlety of the manipulations, and the general lack of training in spotting such forgeries.

Do you have any advice for our readers?

Stay informed about the latest developments in deepfake technology and cybersecurity threats. Regularly update your knowledge and skills through training and awareness programs. Be vigilant during online interactions, and always verify the identity of individuals using multiple methods. Lastly, advocate for and support robust security measures within your organization to protect against these evolving threats.
