Cybercriminals are increasingly exploiting gen AI technologies to enhance the sophistication and efficiency of their attacks.
Artificial intelligence is revolutionizing the technology industry, and the cybercrime ecosystem is no exception: cybercriminals are increasingly leveraging generative AI to improve their tactics, techniques, and procedures and deliver faster, stronger, and sneakier attacks.
But as with legitimate use of emerging AI tools, abuse of generative AI for nefarious ends isn’t so much about the novel and unseen as it is about productivity and efficiency, lowering the barrier to entry, and offloading automatable tasks in favor of higher-order thinking on the part of the humans involved.
“AI doesn’t necessarily result in new types of cybercrimes, and instead enables the means to accelerate or scale existing crimes we are familiar with, as well as introduce new threat vectors,” Dr. Peter Garraghan, CEO/CTO of AI security testing vendor Mindgard and a professor at the UK’s Lancaster University, tells CSO.
Garraghan continues: “If a legitimate user can find utility in using AI to automate their tasks, capture complex patterns, lower the barrier of technical entry, reduce costs, and generate new content, why wouldn’t a criminal do the same?”
Here is a look at various ways cybercriminals are putting gen AI to use in exploiting enterprise systems today.
Taking phishing to the next level
Gen AI enables the creation of highly convincing phishing emails, greatly increasing the likelihood that prospective marks will hand over sensitive information to scam sites or download malware.
Instead of sending generic, unconvincing emails, often riddled with grammatical and formatting errors, cybercriminals can use AI to quickly generate more sophisticated, legitimate-looking messages that can be personalized to target the recipient.
Gen AI tools also help criminals pull together different sources of data to enrich their campaigns, whether through group social profiling or targeted information gleaned from social media.
“AI can be used to quickly learn what types of emails are being rejected or opened, and in turn modify its approach to increase phishing success rate,” Mindgard’s Garraghan explains.
As phishing attacks branch out in kind, AI-generated audio and video deepfakes are being used as part of more sophisticated social engineering attacks. In the most high-profile example to date, a finance worker at design and engineering company Arup was tricked into authorizing a fraudulent HK$200 million ($25.6 million) transaction after attending a videoconference call during which fraudsters used deepfake technology to impersonate the company’s UK-based chief financial officer.
Facilitating malware development
Artificial intelligence can also be used to generate more sophisticated or at least less labour-intensive malware.
For example, cybercriminals are using gen AI to create malicious HTML documents. One XWorm attack was initiated by HTML smuggling, in which a seemingly innocuous HTML document carries malicious code that downloads and runs the malware, and the campaign bears the hallmarks of AI-assisted development.
The loader’s detailed, line-by-line code description suggests it was crafted using generative AI, according to the latest edition of HP Wolf Security’s Threat Insights Report.
In addition, the design of the HTML webpage delivering XWorm is “almost visually identical” to the output from ChatGPT 4o “after prompting the LLM to generate an HTML page that offers a file download,” HP Wolf Security adds.
Similar techniques were in play with the earlier AsyncRAT campaign, according to HP’s enterprise security division.
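To make the technique concrete from the defender’s side, here is a minimal sketch of a heuristic scanner for HTML-smuggling indicators, such as a large inline base64 payload combined with the JavaScript calls typically used to reassemble it into a browser-side download. The thresholds, markers, and scoring below are invented for illustration and fall well short of a production detector of the kind HP Wolf Security describes.

```python
import re
import sys
from pathlib import Path

# Heuristics loosely based on common HTML-smuggling traits: a large
# inline base64 blob plus JavaScript that reassembles it into a
# downloadable file in the browser. A real scanner would weigh many
# more signals; the threshold and weights here are arbitrary.
BASE64_BLOB = re.compile(r"[A-Za-z0-9+/]{2000,}={0,2}")
SUSPICIOUS_JS = [
    "atob(",                 # base64 decoding in the browser
    "new Blob(",             # reassembling the payload client-side
    "URL.createObjectURL(",  # minting a local download link
    "msSaveOrOpenBlob",      # legacy download API seen in older lures
]

def score_html(path: str) -> int:
    """Return a crude suspicion score for one HTML file."""
    text = Path(path).read_text(encoding="utf-8", errors="ignore")
    score = 2 if BASE64_BLOB.search(text) else 0
    score += sum(1 for marker in SUSPICIOUS_JS if marker in text)
    return score

if __name__ == "__main__":
    for path in sys.argv[1:]:
        s = score_html(path)
        print(f"{path}: score={s} ({'SUSPICIOUS' if s >= 3 else 'ok'})")
```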
Elsewhere, ransomware group FunkSec — an Algeria-linked ransomware-as-a-service (RaaS) operator that takes advantage of double-extortion tactics — has begun harnessing AI technologies, according to Check Point Research.
“FunkSec operators appear to use AI-assisted malware development, which can enable even inexperienced actors to quickly produce and refine advanced tools,” Check Point researchers wrote in a blog post.
Accelerating vulnerability hunting and exploits
The traditionally difficult task of analyzing systems for vulnerabilities and developing exploits can be simplified through use of gen AI technologies.
“Instead of a black hat hacker spending the time to probe and perform reconnaissance against a system perimeter, an AI agent can be tasked to do this automatically,” Mindgard’s Garraghan says.
Gen AI may be behind a 62% reduction in the time between a vulnerability being discovered and its exploitation by attackers, from 47 days to just 18 days, according to a recent study by threat intelligence firm ReliaQuest.
“This sharp decrease strongly indicates that a major technological advancement — likely GenAI — is enabling threat actors to exploit vulnerabilities at unprecedented speeds,” ReliaQuest writes.
Adversaries are leveraging gen AI alongside pen-testing tools to write scripts for tasks such as network scanning, privilege escalation, and payload customization. AI is also likely being used by cybercriminals to analyze scan results and suggest optimal exploits, effectively allowing them to identify flaws in victim systems faster.
“These advances accelerate many phases in the kill chain, particularly initial access,” ReliaQuest concludes.
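Defenders can turn the same capability around. As a minimal sketch, assuming the openai Python SDK and an OpenAI-compatible endpoint, the snippet below hands raw scanner findings to an LLM and asks for a prioritized patch order; the model name, prompt, and placeholder findings are illustrative rather than a vetted remediation workflow.

```python
# LLM-assisted triage of vulnerability scan output -- the defensive
# mirror image of the attacker workflow ReliaQuest describes.
# Requires `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder findings; in practice, feed in real scanner output.
scan_findings = """
finding 1: outdated openssh on bastion-01 (internet-facing)
finding 2: vulnerable log4j 2.14 on build-runner-07 (internal only)
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a vulnerability analyst. Rank the findings by "
                "exploit likelihood and business exposure, and recommend "
                "a patch order with a one-line rationale for each."
            ),
        },
        {"role": "user", "content": scan_findings},
    ],
)
print(response.choices[0].message.content)
```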
CSO’s Lucian Constantin offers a deeper look at how generative AI tools are transforming the cyber threat landscape by democratizing vulnerability hunting for pen-testers and attackers alike.
Escalating threats with alternative platforms
Cybercriminals are rapidly shifting from ChatGPT to new AI models from China — DeepSeek and Qwen — to generate malicious content.
“Threat actors are openly sharing techniques to jailbreak these models, bypass security controls, and create malware, info-stealers, and spam campaigns with minimal restrictions,” according to Check Point Research. “Some are even discussing how to use these AI tools to evade banking anti-fraud protections — a significant escalation in cyber threats.”
In a technical blog post, Check Point warns that multiple discussions and shared techniques for using DeepSeek to bypass banking anti-fraud protections have already been found, indicating “the potential for significant financial theft.”
China-based AI company DeepSeek, whose recent entry has sent shockwaves through the industry, is weakly protected against abuse compared to its Western counterparts.
Check Point Research explains: “While ChatGPT has invested substantially in anti-abuse provisions over the last two years, these newer models appear to offer little resistance to misuse, thereby attracting a surge of interest from different levels of attackers, especially the low skilled ones — individuals who exploit existing scripts or tools without a deep understanding of the underlying technology.”
Cybercriminals have also begun developing their own large language models (LLMs) — such as WormGPT, FraudGPT, DarkBERT, and others — built without the guardrails that constrain criminals’ misuse of mainstream gen AI platforms.
These platforms are commonly harnessed for applications such as phishing and malware generation.
Moreover, mainstream LLMs can also be customized for targeted use. Security researcher Chris Kubecka recently shared with CSO how her custom version of ChatGPT, called Zero Day GPT, helped her identify more than 20 zero-days in a matter of months.
Breaking in with authentication bypass
Gen AI tools can also be abused to bypass security defences such as CAPTCHAs or biometric authentication.
“AI can defeat CAPTCHA systems and analyze voice biometrics to compromise authentication,” according to cybersecurity vendor Dispersive. “This capability underscores the need for organizations to adopt more advanced, layered security measures.”
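What “layered” can mean in practice: the minimal sketch below pairs the primary login check with a TOTP one-time code, so that a defeated CAPTCHA or spoofed voiceprint alone is not enough. It assumes the third-party pyotp library; the in-memory user store and enrollment flow are stand-ins invented for illustration.

```python
# A second authentication layer: verify a TOTP one-time code after the
# password (and any CAPTCHA or biometric) check passes.
# Requires `pip install pyotp`.
import pyotp

# In practice, each user's secret is generated at enrollment and kept
# server-side; this in-memory store is illustrative only.
USER_TOTP_SECRETS = {"alice": pyotp.random_base32()}

def second_factor_ok(username: str, submitted_code: str) -> bool:
    """Verify a six-digit TOTP code for the given user."""
    secret = USER_TOTP_SECRETS.get(username)
    if secret is None:
        return False
    # valid_window=1 tolerates one 30-second step of clock drift
    return pyotp.TOTP(secret).verify(submitted_code, valid_window=1)

# Example: generate the current code as an authenticator app would
current_code = pyotp.TOTP(USER_TOTP_SECRETS["alice"]).now()
print(second_factor_ok("alice", current_code))  # True
```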
Countermeasures
Collectively, the misuse of gen AI tools is making it easier for less skilled cybercriminals to earn a dishonest living. Defending against these attack vectors challenges security professionals to harness the power of artificial intelligence more effectively than attackers do.
“Criminal misuse of AI technologies is driving the necessity to test, detect, and respond to these threats, in which AI is also being leveraged to combat cybercriminal activity,” Mindgard’s Garraghan says.
In a blog post, Lawrence Pingree, VP of technical marketing at Dispersive, outlines preemptive cyber defenses that security professionals can take to win what he describes as an “AI ARMS (Automation, Reconnaissance, and Misinformation) race” between attackers and defenders.
“Relying on traditional detection and response mechanisms is no longer sufficient,” Pingree warns.
Alongside employee education and awareness programs, enterprises should be using AI to detect and neutralize generative AI-based threats in real time. Randomization and preemptive changes to IP addresses, system configurations, and the like can also put obstacles in attackers’ way.
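As a toy illustration of that randomization idea, often called moving-target defense, the sketch below periodically rotates a service’s externally visible port so that stale reconnaissance data goes cold. The port range, interval, and function names are invented for illustration; a real deployment would update firewall rules, load balancers, and service discovery in lockstep.

```python
# Toy moving-target defense: periodically rebind a service to a new
# random port. Interval and range are arbitrary illustrative choices.
import random
import time

PORT_RANGE = range(20000, 60000)
ROTATE_EVERY_SECONDS = 3600  # rotate hourly, chosen arbitrarily

def pick_new_port(current: int | None = None) -> int:
    """Choose a fresh port, avoiding an immediate repeat."""
    return random.choice([p for p in PORT_RANGE if p != current])

def rotation_loop() -> None:
    port = pick_new_port()
    while True:
        print(f"rebinding service to port {port}")
        # ...rebind the listener and push firewall/DNS updates here...
        time.sleep(ROTATE_EVERY_SECONDS)
        port = pick_new_port(port)
```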
Leveraging AI for threat simulation and predictive intelligence, modeling potential attack scenarios and anticipating adversary behavior, also offers increased resilience against attacks.