Following Anthropic's refusal to compromise with the U.S. Department of Defense (DoD), which led to the company's inclusion on the U.S. government's "supply chain risk" list, OpenAI's swift decision to accept a Pentagon contract has now triggered internal turmoil. Caitlin Kalinowski, head of OpenAI's robotics division, announced her resignation via social media, sharply criticizing the company for rushing its collaboration with the military without first defining clear "guardrails." The upheaval once again highlights the intense tug-of-war inside AI companies between commercial interests, national security, and ethical boundaries.
Crossing a moral red line? "Lethal autonomy without human authorization"
Caitlin Kalinowski previously led hardware development, including AR glasses, at Meta. In late 2024, she joined OpenAI as head of its Robotics and Consumer Hardware division. That tenure came to an abrupt end over OpenAI's recent military contract with the U.S. Department of Defense.
In a post on X confirming her resignation, Kalinowski stated her concerns bluntly: surveillance of Americans without judicial oversight and granting systems lethal autonomy without human authorization "deserve far more careful consideration" than they are currently receiving.
I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn't an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are…
— Caitlin Kalinowski (@kalinowski007) March 7, 2026
In replies to other posts, she explained that the collaboration with the Department of Defense had been announced too hastily and without clearly defined safety guardrails, emphasizing that this was "first and foremost a governance issue." As the executive responsible for bringing AI into the physical world through robotics hardware, her resistance to AI being applied to autonomous weapons systems was understandably stronger than that of the software-only divisions.
To be clear, my issue is that the announcement was rushed without the guardrails defined. It's a governance concern first and foremost. These are too important for deals or announcements to be rushed.
— Caitlin Kalinowski (@kalinowski007) March 7, 2026
OpenAI quickly moved to downplay the concerns, saying it has drawn red lines: no domestic surveillance and no autonomous weapons
Following the executive's departure, OpenAI officially confirmed Caitlin Kalinowski's resignation and issued a statement acknowledging the "strong opinions" people hold on these issues, promising to continue discussions with relevant parties. However, the company also explicitly rejected the premise of Kalinowski's allegations.
In its statement, OpenAI reiterated its position: "We believe the agreement with the Pentagon creates a viable path for responsible national security applications of AI, and we have drawn clear red lines: no domestic surveillance and no development of autonomous weapons."
Aftermath: A divergence from Anthropic's approach
Caitlin Kalinowski's resignation is the most high-profile and disruptive internal backlash since OpenAI decided to sign a contract with the Department of Defense.
Shortly before this incident, OpenAI's competitor Anthropic was blocked by the Pentagon for refusing to remove AI safety guardrails against "mass surveillance" and "the development of fully autonomous weapons." In contrast to Anthropic's hard line, OpenAI CEO Sam Altman promised to amend the contract with the Department of Defense to explicitly prohibit espionage against Americans, but this "sign first, fix later" approach clearly failed to convince senior executives within the company who prioritize AI ethics.