I’m thrilled to sit down with Rupert Marais, our in-house security specialist with deep expertise in endpoint and device security, cybersecurity strategies, and network management. With rapid advances in artificial intelligence, its role in cybercrime, particularly phishing and social engineering, has become a hot topic. Today, we’ll explore how AI is shaping the tactics of hackers, the challenges they face in adopting these technologies, and the specific ways AI is already being used to enhance attacks. We’ll also discuss the current landscape of underground markets and what the future might hold for AI-driven cyber threats.
How do you see AI currently impacting phishing attacks, based on recent insights from the cybersecurity community?
Right now, AI is playing an evolutionary rather than a revolutionary role in phishing attacks. It’s not completely transforming the game for hackers, but it’s definitely helping them polish their tactics. For instance, AI is being used to draft more convincing content and to localize phishing lures so they target specific audiences with better language and cultural nuance. It’s making those scam emails and messages look less suspicious, but it’s not yet automating the entire process or creating brand-new attack methods from scratch.
Why do you think AI hasn’t become a complete game-changer for cybercriminals at this stage?
There are a few hurdles keeping AI from being fully embraced by hackers. First, the computational power needed to run sophisticated AI models is a big barrier—it’s resource-intensive and costly, which doesn’t always align with the quick, profit-driven nature of cybercrime. Second, integrating AI into hacking tools is complex. It involves training models, setting up automated systems, and figuring out how to avoid detection, all of which take time and expertise that many cybercriminals might not have. Lastly, the existing phishing kits and platforms are still incredibly effective and easy to use, so there’s less incentive to switch to something new and unproven.
Can you walk us through some of the specific ways hackers are leveraging AI, even if it’s not for full automation?
Absolutely. Even though AI isn’t running the show, it’s being used in some clever, targeted ways. Audio deepfakes, for example, are being created to impersonate executives—think a fake voice call from a CEO asking an employee to transfer funds. Then there are AI-powered call centers that automate scams, handling hundreds of calls with realistic-sounding voices to trick people into sharing personal info. Video deepfakes are also popping up in scenarios like job interviews, where scammers pose as candidates or recruiters to steal data or money. Finally, AI voice bots are mimicking legitimate interactions to solicit sensitive details like multifactor authentication codes or credit card numbers.
There have been reports of cybercriminals using well-known AI models from major tech companies for their scams. How are these tools being adapted for malicious purposes?
It’s concerning but not surprising that powerful AI models from big tech are being repurposed for crime. These models, often designed for legitimate uses like natural language processing or voice generation, are being tweaked by cybercriminals to create convincing scripts or synthetic voices for scams. For example, a call center might use these models to generate realistic dialogue for automated fraud calls. The accessibility of such technology—often through APIs or open-source frameworks—means that even those without deep technical skills can adapt them for malicious purposes with minimal effort.
Why do you think there’s still so little evidence of AI-driven tools being widely available in underground markets?
The main reason is that practical adoption of AI by cybercriminals is still in its early stages. The costs of hosting and maintaining advanced models are high, and there aren’t many user-friendly, ready-to-go AI attack kits the way there are for traditional phishing. Plus, integrating AI into a reliable, undetectable attack infrastructure is no small feat—it requires a level of sophistication that many underground players might not have yet. On top of that, discussions in these communities rarely focus on operational uses of AI, which suggests it’s still more of a concept than a widely used tool.
What is your forecast for the role of AI in cybercrime over the next few years?
I think we’re going to see a gradual but significant increase in AI’s role in cybercrime as costs come down and more accessible, state-of-the-art tools emerge. We’ll likely see more deepfake-enabled impersonation attacks, especially targeting business leaders for financial scams. AI could also fuel disinformation campaigns during critical events like elections or social upheavals, amplifying false narratives at scale. While traditional methods will stick around for a while due to their simplicity and effectiveness, the sophistication of AI-driven attacks will grow, and organizations will need to stay ahead by investing in detection technologies and employee awareness to counter these evolving threats.