Upholding ethical standards may come at a heavy price, but this time, tech giants have chosen to stand together. AI startup Anthropic was officially labeled a "supply-chain risk" by the Pentagon for refusing to allow the U.S. Department of Defense to use its technology for mass surveillance and autonomous weapons systems. This label is typically reserved for foreign companies. However, despite the U.S. military's strong stance, cloud providers Microsoft, Google, and Amazon have unusually voiced their support for Anthropic, assuring their enterprise customers that AI models like Claude will continue to provide services in non-military civilian projects, completely unaffected by the military ban.
The three major cloud computing giants reassured the public: civilian projects will continue as usual.
According to CNBC reports citing company spokespeople, the three tech giants reacted swiftly to calm market and developer anxieties after Anthropic's contract negotiations with the U.S. military completely broke down.
Microsoft: Following a review by its internal legal team, Microsoft issued a statement on Thursday confirming that it will continue to provide access to Anthropic through its cloud platform. This assurance applies to all Microsoft customers "except those of the U.S. Department of Defense".
Google and Amazon: Following Microsoft, Google also announced on Friday that its cloud services will continue to support AI tools such as Claude for "non-military projects." Amazon, one of Anthropic's largest investors, echoed this sentiment.
A "moral red line" that will not be crossed, even if it means facing sanctions.
The conflict was sparked by Anthropic's insistence on its guidelines for using AI models.
This AI startup, whose core philosophy is "security and alignment," has consistently demanded that "mass surveillance of U.S. citizens" and "autonomous weapon systems" be excluded from the military use of its products. However, the Pentagon has refused to compromise on these conditions and has issued an ultimatum to Anthropic.
With the ultimatum deadline expiring last Friday, Anthropic remained unyielding. In retaliation, the U.S. Department of Defense officially designated Anthropic's products as a "supply chain risk"—a highly damaging label historically reserved for foreign companies like Huawei deemed a national security threat to the United States. In contrast, media reports indicate that competitors, including OpenAI and Elon Musk's xAI, face significantly fewer constraints and reservations when collaborating with the military.
Business considerations and the ensuing legal battle
Currently, this ban only applies to the "U.S. Department of Defense" and does not extend to all U.S. federal government procurement projects. This explains why the three major cloud computing giants are so actively trying to reassure their customers.
In fact, countless enterprise and public-institution projects are already built on Claude's infrastructure. If unilateral sanctions by the military trigger customer panic and a large-scale migration off the platform, not only will Anthropic suffer a severe blow, but cloud infrastructure providers such as Microsoft, Google, and Amazon will also face huge revenue losses.
On the other hand, Anthropic clearly does not intend to swallow this blow silently: the company is preparing to challenge the "supply chain risk" designation by filing a lawsuit against the U.S. government.