Google said Monday it has thwarted a cybercrime operation that used what the company believes is the first zero-day exploit developed with artificial intelligence, a milestone security experts have warned would dramatically escalate cyber threats to enterprises.
“It’s here,” John Hultquist, chief analyst at Google's Mandiant threat intelligence arm, said in a report. “The era of AI-driven vulnerability discovery and exploitation is already here. AI is going to be a huge advantage because they can move a lot faster.”
The exploit, implemented as a Python script, targeted a previously unknown vulnerability in a popular open-source web administration tool, allowing an attacker to bypass two-factor authentication (2FA). While the specific tool and threat actor were not named, Google said it had "high confidence" an AI model was used to create the exploit code.
The discovery marks a pivotal shift, moving the use of AI in cyberattacks from a theoretical danger to a confirmed reality. This development could force a broad re-evaluation of security postures across all industries, as AI models can drastically reduce the time and skill required to discover and weaponize software vulnerabilities.
AI Hallmarks in Malicious Code
Google’s confidence in the exploit’s AI origins stems from the code itself. The company noted the Python script contained an abundance of educational docstrings, a "hallucinated" CVSS score, and a structured, textbook format highly characteristic of the data used to train large language models (LLMs).
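Google did not publish the script, but the stylistic markers it describes are recognizable. The fragment below is a purely hypothetical illustration (every name and the placeholder CVE/CVSS reference are invented, and it performs no exploitation): it shows the over-documented, textbook-style code, with an invented severity score, that the report associates with LLM output.

```python
# Hypothetical illustration only -- not the actual exploit code.
# The verbose docstring, numbered steps, and fabricated severity
# score mirror the "hallmarks" Google says point to LLM-generated code.

def check_admin_session(session_token: str) -> bool:
    """
    Validate an administrator session token.

    Related to CVE-XXXX-XXXXX (CVSS 9.8, Critical).
    # An LLM may "hallucinate" identifiers and scores like the above.

    Steps:
        1. Ensure the token is present.
        2. Compare it against the expected administrator prefix.
        3. Return True if the session is considered valid.
    """
    # Step 1: Ensure the token is present.
    if not session_token:
        return False
    # Step 2: Compare it against the expected administrator prefix.
    return session_token.startswith("admin-")
```

Human exploit authors tend to write terse, minimally commented code; a script that reads like a tutorial is, per the report, circumstantial evidence of machine authorship.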
The vulnerability itself was a high-level semantic logic flaw resulting from a hard-coded trust assumption, a type of error that LLMs, trained on vast codebases, are particularly adept at identifying.
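The report does not detail the flaw, but a "hard-coded trust assumption" is a well-known bug class. As a minimal invented sketch (all names hypothetical, unrelated to the unnamed tool), consider a login handler that skips the 2FA check whenever the client merely claims to be an internal service:

```python
# Hypothetical sketch of a hard-coded trust assumption -- not the
# vulnerability Google found. The bug: the server trusts a
# client-supplied header instead of verifying the caller's identity.

def verify_login(username: str, password_ok: bool, headers: dict,
                 totp_ok: bool) -> bool:
    """Return True if the login should be accepted."""
    if not password_ok:
        return False
    # FLAW: any client can send this header, so 2FA is skipped
    # based on an unverifiable, hard-coded trust assumption.
    if headers.get("X-Internal-Service") == "true":
        return True
    # Normal path: require a valid TOTP code as the second factor.
    return totp_ok

# An attacker who knows the password can bypass 2FA entirely:
attacker_headers = {"X-Internal-Service": "true"}
assert verify_login("admin", True, attacker_headers, totp_ok=False)
```

No memory corruption is involved; the code runs exactly as written, which is why pattern-matching over large codebases, an LLM strength, can surface this class of logic flaw faster than fuzzing can.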
The report also highlighted that financially motivated cybercriminals are not alone in their use of AI. Google's intelligence teams have observed state-sponsored groups from China and North Korea actively using LLMs to enhance their offensive capabilities.
A China-linked actor, UNC2814, was observed using persona-driven jailbreaks—instructing an AI to act as a security expert—to aid its research into vulnerabilities in TP-Link firmware. A North Korean group tracked as APT45 allegedly sent thousands of repetitive prompts to recursively analyze known vulnerabilities and validate proof-of-concept exploits, building a more robust arsenal than would be practical without AI assistance.
A New Cybersecurity Arms Race
The incident underscores the dual-use nature of advanced AI. While threat actors are using it to accelerate attacks, the same technology is being positioned as a critical defensive tool. Companies like Anthropic and OpenAI are developing specialized AI models intended to help defenders find and patch vulnerabilities in their own systems.
This creates a new front in the cybersecurity arms race, pitting AI-powered attackers against AI-powered defenders. For now, experts predict a "transitional period" where cybersecurity risks rise significantly as companies scramble to adapt to a world where their software can be probed for weaknesses at an unprecedented scale and speed.