❓WHAT HAPPENED: OpenAI is hiring a “head of preparedness” to address the challenges and dangers posed by artificial intelligence (AI) technologies, including a potential rogue AI.
👤WHO WAS INVOLVED: OpenAI, led by CEO Sam Altman, is behind the initiative, with the new role offering a salary of $555,000 plus equity.
📍WHEN & WHERE: The announcement was made recently on X (formerly Twitter).
💬KEY QUOTE: “This will be a stressful job,” said Sam Altman, emphasizing the stakes involved in addressing AI risks.
🎯IMPACT: The role aims to strengthen OpenAI’s safety measures and ensure its AI systems are used responsibly while mitigating potential abuses.
OpenAI announced it is seeking to fill a new position titled “head of preparedness” as part of its efforts to address the risks associated with artificial intelligence (AI), including a possible rogue AI. The role was revealed by OpenAI’s CEO, Sam Altman, who acknowledged the “real challenges” posed by the advanced technologies developed by the organization.
“This will be a stressful job,” Altman stated, underscoring the high stakes and complexity involved in managing the potential dangers of AI systems. He also pointed to concerns over AI’s impact on mental health and its potential to expose critical vulnerabilities in computer security systems.
In a post on X (formerly Twitter), Altman elaborated on the need for a nuanced understanding of AI capabilities. “We are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world,” he wrote. He further noted that while there is a strong foundation for measuring AI capabilities, much work remains to address the complexities and edge cases.
The new position will build on OpenAI’s existing safety measures, which the company claims include “increasingly complex safeguards.” According to the job listing, the role will focus on scaling safety standards alongside the development of more advanced AI systems. The position pays $555,000 plus equity in the company.
In May, The National Pulse reported that OpenAI’s former Chief Scientist, Ilya Sutskever, had suggested constructing a bunker to prepare for the potential risks of artificial general intelligence (AGI), according to insiders familiar with the 2023 tumult at the top of the AI company. During a summer 2023 meeting, Sutskever reportedly stated, “We’re definitely going to build a bunker before we release AGI.”
Two other people who attended the meeting corroborated the account, with one describing Sutskever’s AGI beliefs as akin to anticipating a “rapture.”