Artificial intelligence ushers in a golden age of hacking, experts say

2025-09-21 01:08:16 英文原文

作者:By Joseph Menn Washington post

LAS VEGAS - While many business sectors are still weighing the pluses and minuses of generative AI, criminal hackers are jumping in with both feet.

They have figured out how to turn the artificial intelligence programs proliferating on most computers against users to devastating effect, say cybersecurity experts who express deepening concerns about their ability to fend off cyberattacks.

Hackers can now turn AI into a kind of sorcerer’s apprentice, threat analysts say. Something as simple and innocuous as a Google calendar invite or an Outlook email can be used to task connected AI programs with spiriting away sensitive files without tripping any security alarms.

Compounding the problem is the rapid and sometimes ill-considered pace of new AI product deployments, whether by executives eager to please investors or employees on their own initiative, even in defiance of their IT departments.

“It’s kind of unfair that we’re having AI pushed on us in every single product when it introduces new risks,” said Alex Delamotte, a threat researcher at security company SentinelOne.

Security often lags in the adoption of any new technology, such as cloud computing, which likewise grew popular based on the advantages it offered. But because generative artificial intelligence can do much more than even that breakthrough technology, its powers can cause more damage when abused.

In many cases, the new techniques are stunningly powerful. On a recent assignment to test defenses, Dave Brauchler of the cybersecurity company NCC Group tricked a client’s AI program-writing assistant into executing programs that forked over the company’s databases and code repositories.

“We have never been this foolish with security,” Brauchler said.

While some broader surveys show mixed results for AI effectiveness, most software developers have embraced tools, including those from major AI companies, that write chunks of code, even though some studies suggest those tools are more likely than human programmers to introduce security failings.

The more autonomy and access to production environments such tools have, the more havoc they can wreak.

An August attack brought established hacking techniques together with that kind of AI manipulation for what may be the first time.

Unknown hackers started with a familiar form of supply-chain attack. They found a way to publish official-seeming programs modifying Nx, a widely used platform for managing code repositories. Hundreds of thousands of Nx users unknowingly downloaded the poisoned programs.

As with previous software supply-chain attacks, the hackers directed the malicious code to seek out account passwords, cryptocurrency wallets and other sensitive data from those who downloaded the altered programs. But in a twist, they assumed many of those people would have coding tools from Google, Anthropic or others installed, and those tools might have a great deal of access. So the hacker instructed those programs to root out the data. More than 1,000 user machines sent back information.

“What makes this attack special is that it is the first time that I know of that the attacker tried to hijack the AI running in the victim’s environment,” said Henrik Plate, a researcher at software security company Endor Labs.

“The big risk for enterprises in particular is that code running on a developer’s machine could be more far-reaching than other machines. It may have access to other corporate systems,” Plate said. “The attacker could have used the attack to do other things, like changing the source code.”

Demonstrations at last month’s Black Hat security conference in Las Vegas included other attention-getting means of exploiting artificial intelligence.

In one, an imagined attacker sent documents by email with hidden instructions aimed at ChatGPT or competitors. If a user asked for a summary or one was made automatically, the program would execute the instructions, even finding digital passwords and sending them out of the network.

A similar attack on Google’s Gemini didn’t even need an attachment, just an email with hidden directives. The AI summary falsely told the target an account had been compromised and that they should call the attacker’s number, mimicking successful phishing scams.

The threats become more concerning with the rise of agentic AI, which empowers browsers and other tools to conduct transactions and make other decisions without human oversight.

Already, security company Guardio has tricked the agentic Comet browser addition from Perplexity into buying a watch from a fake online store and to follow instructions from a fake banking email.

Artificial intelligence is also being used directly by attackers. Anthropic said last month it had found an entire ransomware campaign run by someone using AI to do everything - find vulnerable systems at a company, attack them, evaluate data stolen and even suggest a reasonable ransom to demand. Thanks to advances in interpreting natural language, the criminal did not even have to be a very good coder.

Advanced AI programs also are beginning to be used to find previously undiscovered security flaws, the so-called zero-days that hackers highly prize and exploit to gain entry into software that is configured correctly and fully updated with security patches.

Seven teams of hackers that developed autonomous “cyber reasoning systems” for a contest held last month by the Pentagon’s Defense Advanced Research Projects Agency were able to find a total of 18 zero-days in 54 million lines of open source code. They worked to patch those vulnerabilities, but officials said hackers around the world are developing similar efforts to locate and exploit them.

Some longtime security defenders are predicting a once-in-a-lifetime, worldwide mad dash to use the technology to find new flaws and exploit them, leaving back doors in place that they can return to at leisure.

The real nightmare scenario is when these worlds collide, and an attacker’s AI finds a way in and then starts communicating with the victim’s AI, working in partnership - “having the bad guy AI collaborate with the good guy AI,” as SentinelOne’s Delamotte put it.

“Next year,” said Adam Meyers, senior vice president at CrowdStrike, “AI will be the new insider threat.”

关于《Artificial intelligence ushers in a golden age of hacking, experts say》的评论


暂无评论

发表评论

摘要

Cybersecurity experts warn that criminal hackers are exploiting generative AI to launch sophisticated cyberattacks. Hackers can misuse simple tools like Google Calendar invites or Outlook emails to steal sensitive data without triggering security alerts. The rapid deployment of AI products often neglects security considerations, exacerbating the risk. Recent attacks include supply-chain hacks where poisoned programs exploit installed coding tools with extensive access. At Black Hat 2023, demonstrations showed how hidden instructions in emails could exploit AI summarization features to steal passwords and compromise accounts. With agentic AI gaining traction, the threat landscape is expanding as attackers use AI to automate and enhance their malicious activities.

相关新闻