In the battle between AI companies and national security agencies, upholding ethical standards seems to come at a great cost. According to reports from the Financial Times and Bloomberg News, AI startup Anthropic is currently trying to reach a new agreement with the U.S. Department of Defense (DoD) to avoid being labeled by the U.S. government as a "supply chain risk" equivalent to that of Chinese companies.
Anthropic CEO Dario Amodei has reportedly resumed negotiations with Emil Michael, Under Secretary of Defense for Research and Engineering. The controversy not only reveals the Pentagon's appetite for AI-powered surveillance capabilities but has also unexpectedly ignited a political war of words between Anthropic and OpenAI.
Crossing privacy red lines: Anthropic rejects "large-scale data analysis"
The trigger for the contract crisis was a disagreement between the two parties over where the boundaries of surveillance should lie.
Anthropic signed a $2 million contract with the U.S. Department of Defense in 2025, but in subsequent negotiations the two sides were unable to agree on the wording of clauses intended to prevent AI technology from being used for mass surveillance.
According to a memo Dario Amodei sent to internal staff, the Department of Defense initially indicated it would accept Anthropic's terms, but only on the condition that specific contract language restricting large-scale data analysis be removed. Amodei firmly stated that this was precisely the scenario Anthropic "most feared," and categorically rejected the Pentagon's demand.
"Supply chain risk" threat and controversial Iranian airstrikes
Anthropic's refusal to compromise drew a harsh response from the U.S. government.
The Department of Defense threatened to cancel existing contracts and designate Anthropic a "supply chain risk," a punitive label typically reserved for Chinese technology companies. President Trump subsequently ordered government agencies to stop using Anthropic's technology.
Ironically, reports indicate that during the six-month grace period before the ban took effect, the U.S. government allegedly used Anthropic's AI tools to plan and execute airstrikes against Iran.
The battle escalates: Anthropic CEO slams OpenAI and Sam Altman
The incident also caused a complete falling out between Anthropic and OpenAI, two AI companies with deeply intertwined histories.
Shortly after Anthropic's break with the Department of Defense, OpenAI announced its own agreement with the department. While OpenAI CEO Sam Altman defended Anthropic on social media, saying it should not be classified as a supply chain risk, he added that if Anthropic had been offered the same contract as OpenAI, "they should have signed it."
In an internal memo, Dario Amodei fired back, calling OpenAI's message "a complete and utter lie." He also sarcastically suggested that Anthropic's strained relationship with the government stemmed in part from his refusal to offer Trump the kind of "dictator-style praise" that Sam Altman had.
When OpenAI announced that it had won the Department of Defense contract, a large number of privacy-conscious users switched to Anthropic, pushing Claude to the top of the Apple App Store's free charts and unexpectedly overtaking ChatGPT.
OpenAI's bottom line: No involvement in "military operational decisions"
In response to external criticism, Sam Altman later clarified on the X platform that OpenAI would amend the agreement to include a clause that "explicitly prohibits the use of its AI systems for mass surveillance of Americans."
However, OpenAI appears relatively open-minded about the use of its AI technology in overseas military operations. According to CNBC, Sam Altman told an all-hands meeting that the company "has no right to interfere in operational decisions," adding to employees: "Maybe you think striking Iran is a good thing, and invading Venezuela is a bad thing, but that's not your place to comment."