The Trump administration is now using artificial intelligence to audit the U.S. defense supply chain, identifying deep vulnerabilities tied to two decades of China's targeted economic strategy.

The Trump administration is using artificial intelligence to expose critical vulnerabilities inside the U.S. military supply chain, identifying what one expert calls the result of two decades of economic warfare by China. The AI-powered audit has flagged deep-seated risks tied to Chinese materials and suppliers within core weapons systems, forcing a policy reversal on AI oversight.
"There is a big, thick middle of manufacturing that China has targeted over the last 20 years," Brandon Daniels, CEO of the supply chain analytics firm Exiger, said in a recent interview. His firm's AI platform is being used to trace the origin of critical components for the U.S. military.
The analysis revealed that where the U.S. once had over 360 manufacturers for key defense components like iron and magnesium castings, that number has fallen below 120 in the past decade. This decline comes as AI models from companies like Anthropic and Palantir are being used by the Pentagon to sift through vast amounts of data for both logistical analysis and battlefield targeting.
The push to secure the supply chain highlights a sharp tension for the administration: harnessing AI for national security while simultaneously trying to control its potential for catastrophic risk. This has led to a sudden embrace of government safety testing for advanced AI, a policy the administration previously dismissed as overregulation.
The administration’s change of heart came after Anthropic, a leading AI firm, announced it would not publicly release its latest model, dubbed Mythos. The company’s internal testing showed the model was so powerful at finding cybersecurity flaws that it could be weaponized by bad actors to compromise global computer systems. The incident reportedly spooked the White House into pursuing agreements with Google DeepMind, Microsoft, and xAI to allow government safety checks of their most advanced models.
The new policy gives a mandate to the rebranded Center for AI Standards and Innovation (CAISI) to evaluate frontier AI models for national security risks. However, critics question whether CAISI, with a budget of just $10 million, is adequately funded or equipped to evaluate secret, complex systems from firms with multi-billion dollar research budgets. Devin Lynch, a former White House cyber policy director, noted on LinkedIn that "Capability assessments are only as good as the threat models behind them," questioning what standards CAISI would use.
The debate over AI safety runs parallel to its rapid adoption on the battlefield. In the ongoing conflict with Iran, the U.S. military has used AI more than in any previous war, according to a recent CNN report. Software from contractors like Palantir sifts through satellite and signals intelligence to recommend targets to commanders, dramatically accelerating the "OODA loop" of observing, orienting, deciding, and acting.
Defense Secretary Pete Hegseth has insisted that "humans make decisions," but the speed of AI-assisted targeting raises new legal and ethical questions. The Pentagon is in a public dispute with Anthropic over the company's insistence on placing limits on how its technology can be used, with Hegseth criticizing the firm's stance. This conflict underscores the core challenge: the military wants to move as fast as possible with AI to maintain an advantage, while the creators of the most powerful AI systems are increasingly concerned about the consequences.
The situation creates a volatile environment for investors. Companies in the defense and technology sectors with high-risk supply chain ties to China could face significant headwinds. Conversely, firms like Exiger that provide AI-driven security and supply chain analysis may see a surge in demand. The administration's pivot suggests a new, more interventionist approach that could reshape regulation for the entire AI industry, affecting giants like Microsoft and Google as well as specialized defense contractors.
This article is for informational purposes only and does not constitute investment advice.