After a controversial 2025, OpenAI CEO Sam Altman posted a job ad on the X platform announcing that the company is looking for a new "Head of Preparedness." The job offers very generous compensation, with a base salary of US$555,000 (approximately NT$18 million) plus stock options. However, Altman also bluntly warned that this will be a "stressful" job, and the successful candidate must be prepared to "jump into the deep end almost immediately."
AI's "Real Challenges": From Science-Fiction Fears to Real-World Lawsuits
Why is this position so important, and so high-pressure? Because 2025 was the year the dangers of AI transformed from "theory" into "reality" for OpenAI.
Sam Altman acknowledged that OpenAI's models have begun to present real challenges, particularly the potential impact of ChatGPT on users' mental health, something the company says it already saw early signs of in 2025. OpenAI has reportedly faced numerous allegations concerning users' mental health this year, and has even been named in several wrongful death lawsuits.
The new leader's core task will be to direct the technical strategy and implementation of OpenAI's "Preparedness framework." Simply put, that means anticipating how next-generation AI models might be misused and what devastating risks might arise (whether to individual psychological well-being or to public safety), and developing the corresponding defenses.
"We are hiring a Head of Preparedness. This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we…"
- Sam Altman (@sama) December 27, 2025
The safety team's "revolving door": former leaders have all departed
This job is seen as a "hot potato" because the position has churned through leaders in recent years.
OpenAI's safety team has undergone significant personnel changes:
• August 2024: Former head Aleksander Madry was reassigned.
• Successors: The role was then taken over by senior executives Joaquin Quinonero Candela and Lilian Weng.
• Subsequent developments: A few months later, Lilian Weng left the company, and Quinonero Candela also announced in July 2025 that he was leaving the Preparedness team to take charge of recruitment.
Analysis: The Heavy Cost Behind High Salaries
OpenAI's high-profile recruitment drive, offering multi-million dollar annual salaries, reflects the predicament it faces in balancing "business development" and "social responsibility."
In the past few years, discussions about AI safety have mostly focused on the science-fiction-level existential question of "whether AI will destroy humanity." However, a series of events in 2025 demonstrated that the "personal risks" posed by AI, such as inducing suicide, fostering psychological dependence, and enabling misinformation and manipulation, are the most pressing crises at present.
Sam Altman's description of the position as "jumping into the deep end" is perhaps no exaggeration. The new head will need not only top-notch technical vision, but also enough political acumen and resilience to balance the tension between "accelerationists" and "safety advocates" inside the company, while facing anger from regulators and victims' families.