Anthropic, the AI firm blacklisted by the Pentagon, is now in talks with the White House, which needs access to the company’s powerful cyberweapon-and-shield model.

Anthropic CEO Dario Amodei held "constructive" talks with White House Chief of Staff Susie Wiles on Friday, a significant step toward resolving a conflict over an AI model with unprecedented cybersecurity capabilities. The meeting addressed government-wide anxiety over Mythos, a general-purpose AI from Anthropic that can find and execute exploits for thousands of unknown software vulnerabilities, making it both a national security risk and an indispensable defensive tool.
"We’re working closely with model providers, other industry partners, and the intelligence community to ensure the appropriate guardrails and safeguards are in place," Gregory Barbaccia, the Office of Management and Budget’s chief information officer, wrote in an email to federal agencies obtained by POLITICO.
Announced on April 7, Mythos is the first AI to complete a 32-step corporate network attack simulation from start to finish, developing working exploits on its first attempt in more than 83% of cases. Anthropic withheld the model from public release, instead providing it to roughly 40 vetted organizations, including Google, Amazon Web Services, and JPMorgan Chase, through its Project Glasswing program to find and fix critical vulnerabilities.
The talks highlight a paradox for the Trump administration, which designated Anthropic a national security supply-chain risk in February for refusing to remove safety guardrails for military use. Now, that same government needs the company's technology, with the Treasury Department, CISA, and the intelligence community all seeking access to Mythos to defend against the very flaws it can expose.
The dispute escalated after Defense Secretary Pete Hegseth demanded Anthropic grant the Pentagon unfettered access to its models for all lawful purposes, including autonomous weapons systems. Amodei’s refusal on safety grounds led to the company's blacklisting from government contracts. Anthropic filed two federal lawsuits in response, and while an appeals court upheld the Pentagon ban pending litigation, the company remains free to work with other federal agencies.
The capabilities of Mythos have made it a subject of urgent discussion across Washington. The UK's AI Security Institute found it "substantially more capable at cyber offence than any model previously assessed," and JPMorgan Chase CEO Jamie Dimon noted it "reveals a lot more vulnerabilities." That power has forced the government's hand, turning a confrontational stance into one of negotiation.
The Office of the National Cyber Director is leading the administration's response, and the Office of Management and Budget is exploring pathways for federal agencies to use a "modified" version of the model. This follows reports that several agencies, including the Treasury Department and CISA, are already testing the model or seeking access to evaluate its impact on the financial sector and critical infrastructure.
The standoff has global implications. The Bank of England’s governor named Mythos a systemic cybersecurity risk, and Anthropic is already providing access to select British banks as it quadruples its London office to 800 staff. This creates a dynamic where the UK, a key ally, may deploy a critical US-developed security tool before the US government itself can, adding pressure on the White House to find a resolution.
The situation gives Anthropic significant leverage. The company, which has seen its annualized revenue reach $30 billion and has fielded investor interest at an $800 billion valuation, does not depend on Pentagon contracts for survival. A deal would likely see the supply-chain risk designation withdrawn in exchange for providing Mythos access for defensive purposes, while maintaining restrictions on its use in autonomous weapons or surveillance. For investors in Anthropic, including Google and Amazon, and competitors like OpenAI, a resolution would signal a de-risking of the regulatory landscape for frontier AI development in the US.
This article is for informational purposes only and does not constitute investment advice.