Anthropic is arming roughly 40 of the world’s top technology companies with a new AI model designed specifically to hunt for security flaws, escalating the use of artificial intelligence in corporate cyber defense.

Anthropic announced on April 7 that it is releasing a limited preview of its Claude Mythos Preview model, an AI system that excels at identifying software vulnerabilities, to a select group of about 40 companies. The group includes technology giants Microsoft, Amazon, and Apple alongside cybersecurity leaders CrowdStrike and Palo Alto Networks, which will use the model for defensive security work.
The move reflects a growing consensus that defending against AI-driven attacks requires AI-powered tools. "This is the most change in the cyber environment, ever," Francis deSouza, chief operating officer and president of security products at Google Cloud, said in a recent New York Times interview on the topic. "You have to fight A.I. with A.I."
The initial cohort of users for Claude Mythos Preview represents a significant concentration of the world's technology infrastructure and security apparatus. By providing the tool to firms like Microsoft, Amazon, and Apple, Anthropic is placing its new defensive model at the heart of the ecosystems most targeted by sophisticated attackers.
This initiative highlights the dual-use nature of advanced AI, which can be used for both offense and defense. As threat actors increasingly abuse AI services for malicious purposes, the development of specialized defensive models like Mythos becomes critical for maintaining security, potentially boosting the value of cybersecurity firms equipped with these next-generation tools.
The release of Mythos comes as attackers are increasingly "living off the AI land," abusing legitimate commercial AI models to orchestrate attacks. Security researchers have documented instances where threat actors used Anthropic's own Claude models for cyber-espionage and abused the OpenAI Assistants API to create covert command-and-control channels, according to a recent report from CSO Online. These techniques allow malicious traffic to be camouflaged as normal AI activity, bypassing traditional security controls.
Anthropic's strategy with Mythos is a direct response to this evolving threat, providing a purpose-built tool for defenders to proactively find and fix the same kinds of flaws that attackers are racing to exploit. The model is designed to automate and accelerate the work of security researchers, enabling them to secure systems faster than hackers can break them.
Anthropic's decision to limit the initial rollout to a vetted list of 40 companies underscores the inherent risks of such a powerful tool. An AI that is exceptionally good at finding security weaknesses could be a weapon in the wrong hands. This controlled release allows the company to gather data on its performance in real-world defensive scenarios while minimizing the potential for misuse.
For investors, this development could positively impact cybersecurity stocks. Companies like CrowdStrike and Palo Alto Networks, which are part of the initial access group, stand to enhance their defensive capabilities, potentially increasing the effectiveness of their platforms and creating a competitive advantage. The market has been bullish on the application of AI in security, and the successful deployment of models like Mythos could reinforce the thesis that AI-native security platforms will outperform legacy vendors.
This article is for informational purposes only and does not constitute investment advice.