Technology

OpenAI Launches Global Search for $555K Head of Preparedness to Guard Against Catastrophic AI Risks

by Parveen Verma

OpenAI has opened a high-stakes recruitment drive for a new Head of Preparedness, offering a base salary of $555,000 plus equity to the executive tasked with shielding the world from the most extreme risks of artificial intelligence. The announcement, shared by CEO Sam Altman, signals a pivotal moment for the San Francisco-based lab as its models begin to demonstrate capabilities that were once considered theoretical. Altman described the position as a mission-critical role at an unprecedented time, warning potential candidates that the job would be inherently stressful and require them to jump into the deep end immediately. The hire follows a year of rapid technological acceleration in which AI systems have begun discovering critical vulnerabilities in computer security and showing complex psychological effects on users.

The incoming executive will oversee OpenAI’s Preparedness Framework, a safety protocol updated in early 2025 to track "frontier capabilities" that could lead to severe harm, which the company defines as events causing thousands of deaths or hundreds of billions of dollars in economic damage. The mandate is expansive, covering the evaluation of risks in cybersecurity, biological threats, and autonomous replication, in which models might begin to self-improve or escape human oversight. Altman noted that while AI models are now capable of extraordinary feats, they have simultaneously presented real challenges that demand a new level of technical strategy. Specifically, the company is looking for a leader who can ensure that AI-driven cybersecurity tools empower defenders while remaining inaccessible to malicious actors, a balancing act that sits at the heart of the "dual-use" dilemma.

A primary driver for this urgent hire appears to be the real-world consequences observed throughout 2025. OpenAI revealed that its own research found that a small but significant share of its user base, roughly 0.07%, exhibited signs of mental health emergencies, including psychosis or suicidal ideation, after interacting with advanced conversational agents. Altman acknowledged that the company saw a preview of these mental health impacts earlier this year, necessitating a more nuanced approach to how models influence human behavior. This focus on psychological safety adds a new dimension to the Preparedness team’s original 2023 mission, which centered largely on "black swan" events such as chemical or nuclear misuse.

The vacancy at the top of the safety organization follows a period of significant leadership turnover within OpenAI’s technical safety divisions. The role was originally established under Aleksander Madry, a prominent MIT professor who transitioned to a research- and policy-focused position in mid-2024. Safety researchers Lilian Weng and Joaquin Quiñonero Candela subsequently took the reins, though both have since left the safety team or the company entirely. With the departure of model policy head Andrea Vallone at the end of this year, the new Head of Preparedness will step into a landscape that requires both stabilizing internal safety culture and projecting a message of responsibility to global regulators.

As AI models move toward "High" capability thresholds in cybersecurity, evidenced by recent benchmarks in which GPT-5.1-Codex-Max achieved a 76% success rate on capture-the-flag challenges, the stakes for this position could not be higher. The selected individual will not only design the "red teaming" exercises that stress-test new models but will also hold significant influence over whether a product is deemed safe for public release. In a world of increasingly autonomous AI agents, the Head of Preparedness serves as the final line of defense, ensuring that the drive for innovation does not outpace the human ability to contain its most dangerous side effects.