OpenAI struck a deal for the Pentagon to use its models in the U.S. defense agency's classified network, with "safeguards," after President Donald Trump blacklisted AI rival Anthropic.
Trump had ordered the government to stop using Anthropic, calling it a threat to national security after it refused to agree to unconditional military use of its Claude models.
Anthropic vowed to sue over the "intimidation" in what has become a rare public dispute between a major tech firm and the U.S. government, insisting its technology should not be used for mass surveillance or fully autonomous weapons systems.
Hours later, on Feb. 27, OpenAI CEO Sam Altman announced a deal for the Pentagon to use its models under red lines similar to Anthropic's, relying on "technical safeguards" that the Department of Defense had agreed to.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote on X, adding that those principles went "into our agreement."
Washington had lashed out at Anthropic over its ethical objections, saying the Pentagon operates within the law and that contracted suppliers cannot set terms on how their products are employed.