Resignations Shake OpenAI’s Superalignment Team Amid Resource Disputes
OpenAI’s Superalignment team, tasked with developing ways to govern and steer superintelligent AI systems, was promised 20% of the company’s compute resources. According to an insider, however, the team’s requests for even a fraction of that compute were frequently denied, impeding its progress and its ability to perform essential work.
This resource allocation issue contributed to a wave of resignations within the team, including that of co-lead Jan Leike, a former DeepMind researcher who played a significant role in the development of ChatGPT, GPT-4, and ChatGPT’s predecessor, InstructGPT. Leike publicly shared his reasons for leaving on Friday morning, citing long-standing disagreements with OpenAI’s leadership over the company’s core priorities.
In a series of posts on X, Leike expressed his concerns about the company’s trajectory. “I believe much more of our bandwidth should be spent getting ready for the next generations of models, focusing on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics,” he wrote. Leike emphasized the difficulty of these problems and his belief that the company was not on the right path to address them effectively.
OpenAI did not immediately respond to requests for comment regarding the resources promised to and actually allocated for the Superalignment team. Formed last July under the leadership of Leike and OpenAI co-founder Ilya Sutskever, who also resigned this week, the team aimed to solve the core technical challenges of controlling superintelligent AI within four years. Despite publishing significant safety research and distributing substantial grants to external researchers, the team struggled to secure the upfront investments it needed, as product launches increasingly dominated OpenAI leadership’s focus.