Anthropic, an AI startup, has been ordered banned from all US federal government use by President Trump and the Department of Defense after refusing to allow its AI model, Claude, to be used for "mass surveillance within the United States" or for "autonomous lethal weapons without human supervision." The move not only caused an uproar in Silicon Valley but also prompted hundreds of Google and OpenAI employees to launch a cross-company petition in support of Anthropic, calling on the tech community to unite against government pressure.
This landmark showdown, widely seen as a clash between AI ethics and national security, has now escalated to its most intense point yet. For the first time, the US government has turned on one of its own leading AI companies, wielding the banner of "supply-chain risk," a designation typically reserved for sanctioning hostile foreign powers.
The Trump administration took strong action: federal agencies were given six months to phase Claude out entirely.
According to foreign media reports, US President Trump issued a strong statement on February 27 via the social media platform Truth Social, criticizing Anthropic for attempting to force the Department of War (DOW) to comply with its terms of service rather than the US Constitution, and calling it a "catastrophic mistake." Trump formally ordered all federal government agencies (including the Department of Defense) to completely phase out and cease using Anthropic products within six months.
Following this, U.S. Defense Secretary Pete Hegseth announced that Anthropic was being designated a "supply chain risk posing a threat to national security." This ban took effect immediately, meaning that any contractor, supplier, or partner doing business with the U.S. military would be prohibited from engaging in any commercial activities with Anthropic.
Anthropic refused to compromise and threatened to take the matter to court.
Faced with extreme pressure from the White House and the Pentagon, Anthropic chose to respond forcefully.
Anthropic CEO Dario Amodei had previously drawn two inviolable red lines: AI must never be used for mass surveillance of the American people, and AI must never be integrated into autonomous weapons systems without human oversight.
In response to the government's ban, Anthropic issued an official statement expressing strong regret and noting that designating a U.S. company as a "supply chain risk" was unprecedented. A spokesperson emphasized, "No matter how much intimidation or punishment the Department of Defense imposes, it will not change our position on large-scale domestic surveillance or fully autonomous weapons. We will challenge this supply chain risk designation in court."
Silicon Valley United: "We will not be divided."
Anthropic is not alone in this political storm. More than 450 Google and OpenAI employees (most signing under their real names) have jointly published an open letter titled "We Will Not Be Divided."
The letter strongly urged Google and OpenAI executives to set aside commercial rivalry and stand with Anthropic in jointly rejecting the Department of Defense's overreaching demands for AI applications. Its authors argued that the government is attempting to use fear to divide AI companies and force them into compliance one by one (xAI, led by Elon Musk, had earlier agreed to the Department of Defense's cooperation terms).
In an internal memo, OpenAI CEO Sam Altman assured employees that OpenAI would draw the same ethical red lines as Anthropic; he later told the media that the U.S. Department of Defense should not use such tactics to threaten American technology companies.