Just as the US Department of Defense and AI startup Anthropic were clashing over "red lines regarding model usage," with their relationship breaking down completely, OpenAI announced that it had reached an agreement with the U.S. Department of Defense to deploy its advanced AI systems in classified military environments. Notably, OpenAI emphasized that it has not removed its models' safety guardrails, and even explicitly set out three red lines for their use. Meanwhile, OpenAI CEO Sam Altman publicly and strongly opposed the U.S. government's designation of Anthropic as a "supply chain risk."
OpenAI's "Three Red Lines" and its proprietary classified-deployment architecture
Compared to the two bottom lines Anthropic previously insisted on (no large-scale domestic surveillance and no fully autonomous weapons), the conditions OpenAI set for the Department of Defense this time are more specific, clearly drawing three inviolable red lines:
• Prohibited for use in large-scale domestic surveillance.
• Prohibited for use in fully autonomous weapon systems.
• Prohibited for use in high-risk automated decision-making systems (such as consequential adjudication systems resembling "social credit" mechanisms).
Anthropic's earlier negotiations with the Department of Defense broke down because the military demanded a clause permitting "any lawful use" and barring Anthropic from imposing additional technical restrictions; Anthropic argued that this would cause it to completely lose control over how its models were actually used.
So how did OpenAI get the military to sign on? The key lies in its deployment architecture. OpenAI adopted a limited cloud-deployment strategy: it maintains a "safety stack" that is run and continuously updated by OpenAI itself, and it assigns engineers with national security clearances to jointly supervise its operation. Through this dual safeguard of technology and contractual framework, OpenAI retains substantial control over how its models are used even as it enters the national security system.
Sam Altman intervened to smooth things over: people shouldn't treat their own side as the enemy.
OpenAI CEO Sam Altman later revealed more details on the social media platform X. He stated that OpenAI originally only intended to participate in non-classified projects and had even previously rejected the classified contract that Anthropic had accepted.
For a long time, we were planning to do non-classified work only. We thought the DoW clearly needed an AI partner, and doing classified work is clearly much more complex. We have said no to previous deals in classified settings that Anthropic took.
We started talking with the DoW…
- Sam Altman (@sama) March 1, 2026
However, the situation took a sharp turn this week, prompting OpenAI to accelerate negotiations with the Department of Defense. Sam Altman noted that the military showed great "flexibility" toward the security architecture OpenAI proposed. He emphasized that the intent of this hastily reached agreement was to "de-escalate the situation" and to establish a standard ensuring that, going forward, all U.S. AI labs can serve the military under the same security conditions.
More importantly, both OpenAI officially and Sam Altman personally strongly criticized the Department of Defense for labeling Anthropic a "supply chain risk." They argued that applying to a top U.S. AI company a designation usually reserved for "hostile countries" (such as China and Russia) harms both the U.S. AI industry and national security.
The intriguing "Department of War" and the realities of the battlefield
It's worth noting that throughout this controversy, both Anthropic and OpenAI deliberately used the historical name "Department of War" (DoW), the term that predated the department's 1947 reorganization, in their official statements, rather than the current "Department of Defense" (DoD). Outside observers read this as a pointed signal that the deployment of these AI models involves "substantial military operations and lethal battlefield applications," rather than purely administrative defense matters.
And this is indeed the case. According to a Wall Street Journal report, just hours after US President Trump announced a complete ban on Anthropic, the US military was still using Anthropic's Claude AI in airstrikes against Iran. Multiple frontline units, including US Central Command, rely heavily on Claude for intelligence analysis, target identification, and battlefield scenario simulation.