A new generation of AI models is discovering software flaws at a rate that outpaces human patching ability, collapsing the window between bug discovery and weaponization to mere hours and creating systemic risk for critical infrastructure.

A class of artificial intelligence models led by Anthropic’s Mythos Preview is now discovering security vulnerabilities at a scale that threatens to overwhelm the technology industry’s defenses, highlighted by the model’s recent discovery of a bug that sat undetected in the OpenBSD operating system for 27 years. This capability marks a structural shift in cybersecurity, in which the time from vulnerability discovery to weaponization is compressing from months to minutes.
"LLMs have now bypassed human capability for bug finding," said Alex Stamos, chief security officer at Corridor and former head of security at Facebook. The surge in high-quality, AI-found vulnerabilities follows the release of advanced models in late 2025, creating what some are calling a "bug armageddon" that challenges the entire software patching lifecycle.
The numbers quantify the pressure on defenders. Bug bounty platform HackerOne reports that submissions are up 76 percent from last year, while the average time to fix a vulnerability has ballooned from 160 to 230 days. Meanwhile, Palo Alto Networks, a partner in Anthropic’s defensive coalition, reports that the fastest AI-assisted attacks now move from initial access to data exfiltration in just 25 minutes, a timeline that legacy enterprise patching cycles, often measured in days or weeks, cannot match.
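To get a rough sense of that mismatch, the figures above can be translated into a single ratio. This is a back-of-the-envelope illustration, not a calculation published by HackerOne or Palo Alto Networks; only the raw numbers come from their reporting.

```python
# Back-of-the-envelope comparison of the figures cited above.
# The day counts and the 25-minute attack window are from the reporting;
# the conversion and the comparison itself are illustrative only.
MINUTES_PER_DAY = 24 * 60

attack_minutes = 25    # fastest reported AI-assisted access-to-exfiltration
avg_fix_days = 230     # current average time to fix a vulnerability
prior_fix_days = 160   # last year's average

fix_minutes = avg_fix_days * MINUTES_PER_DAY
print(f"Remediation takes ~{fix_minutes / attack_minutes:,.0f}x longer "
      f"than the fastest reported attack")
print(f"Average fix time grew {avg_fix_days / prior_fix_days - 1:.0%} year over year")
```

On those numbers, an average remediation cycle is more than thirteen thousand times longer than the fastest observed attack, which is the asymmetry the following paragraph describes.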
This growing asymmetry between offense and defense poses a direct risk to the foundational layers of the internet. Much of the world’s digital infrastructure, from operating systems to financial services, is built on open-source software maintained by small, often volunteer teams who now face an onslaught of AI-generated bug reports. The risk is that previously ignored or obscure software becomes a primary attack vector.
The experience of developers maintaining critical infrastructure illustrates the shift. Daniel Stenberg, lead developer of the 30-year-old cURL data transfer tool, saw bogus, AI-generated bug reports swamp his team in 2025. But by early 2026, the quality had flipped. Just three months into the year, his team fixed more legitimate vulnerabilities than in the entirety of the previous two years, largely due to higher-quality reports from AI-assisted researchers.
This acceleration is creating an unprecedented challenge. Sergej Epp, CISO at Sysdig, created a "Zero-Day Clock" to visualize the collapsing timeline. Eight years ago, the average time between a bug's public disclosure and an attack was 847 days. In 2025, it was 23 days. This year, most bugs are exploited within a day. The Cloud Security Alliance warns that security organizations will likely be "overwhelmed by the need to apply patches and respond to AI-discovered vulnerabilities, exploits, and autonomous attacks."
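As an illustrative aside, and not part of Epp's Zero-Day Clock itself, those three data points can be restated as year-over-year shrink rates. The year labels are assumptions inferred from the "eight years ago" framing; only the day counts come from the reporting.

```python
# Illustrative only: translate the Zero-Day Clock figures cited above
# into year-over-year shrink rates. Year labels are assumptions inferred
# from "eight years ago"; only the day counts come from the reporting.
observations = {
    2018: 847,   # avg days from public disclosure to exploitation
    2025: 23,
    2026: 1,     # "most bugs are exploited within a day"
}

# Long-run trend: compound annual shrinkage from 2018 to 2025.
historical = (observations[2025] / observations[2018]) ** (1 / (2025 - 2018))
print(f"2018-2025: window shrank to ~{historical:.0%} of its size each year")

# This year's jump: roughly 23 days down to about 1 day in a single year.
latest = observations[2026] / observations[2025]
print(f"2025-2026: window shrank to ~{latest:.0%} of its size in one year")
```

The long-run trend works out to the window shrinking to roughly 60 percent of its size each year, while this year's drop compresses it to about 4 percent in a single step, which is why the curve is described as collapsing rather than merely declining.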
In response, Anthropic has formed Project Glasswing, a coalition of around 50 technology companies including Microsoft, Google, Amazon Web Services, Cisco, and the Linux Foundation. The initiative provides these partners with access to the unreleased Mythos Preview model to find and fix flaws in their own systems before malicious actors can exploit them. Anthropic has stated it has no plans for a public release, citing the high potential for misuse.
"These maintainers are already overworked before AI," said Jim Zemlin, CEO of the Linux Foundation, which is experimenting with the model to help secure the Linux kernel. "This just makes their lives a lot better."
The move has not been without controversy, as the Pentagon labeled Anthropic a "supply chain risk" for asking the government not to use its technology for autonomous weapons, a designation Anthropic is disputing. Still, the formation of the coalition underscores the dual-use nature of the technology. While the models can be used for defense, the capabilities will inevitably proliferate. The most advanced open-weight models, which can be modified by anyone to remove safety guardrails, are estimated to be less than a year behind proprietary models like Mythos.
The development has sent a chill through the software industry, with shares of some cybersecurity companies falling on the news. The core risk for investors is that the value of any software company is partly dependent on its security posture. As AI dramatically lowers the cost of finding exploits, companies with significant legacy code or dependencies on under-resourced open-source projects face a repricing of their risk profile.
This article is for informational purposes only and does not constitute investment advice.