Pentagon formally designates Anthropic as supply-chain risk

WASHINGTON

Pages from the Anthropic website and the company's logo are displayed on a computer screen in New York on Thursday, Feb. 26, 2026. (AP Photo/Patrick Sison)

The Pentagon has formally notified Anthropic that the company and its state-of-the-art AI products have been designated a supply-chain risk, escalating a bitter dispute over AI safeguards.

It is the first time a U.S. company has received such a designation, which until now was reserved for firms from adversary countries, such as China's Huawei.

The designation requires defense vendors and contractors to certify that they do not use Anthropic's Claude models in their Pentagon work, a requirement that could carry wider consequences for the company.

The firm has vowed to challenge the designation in court, in what has become a rare public showdown between a major tech company and the U.S. government.

The dispute erupted after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems.

Washington hit back, saying the Pentagon operates within the law and that contracted suppliers cannot dictate terms on how their products are used.

The conflict took a turn on March 4 when The Information reported that Anthropic CEO Dario Amodei had told staff the actions against the company were politically motivated.

The "real reasons" the Trump administration does not like the company, Amodei said, are "that we haven't donated to Trump (while OpenAI/Greg have donated a lot)." He was referring to Greg Brockman, the president of ChatGPT maker OpenAI, who has donated $25 million to Trump.

According to multiple U.S. media reports, the military used Anthropic's Claude AI model in its weekend attack on Iran and continues to use it, despite a government-wide ban on the technology ordered last week.