Monday, February 23, 2026

ChatGPT Creator Seeks Safety Chief to Prepare for Potential Rogue AI.

PULSE POINTS

WHAT HAPPENED: OpenAI is hiring a “head of preparedness” to address the challenges and dangers posed by artificial intelligence (AI) technologies, including a potential rogue AI.

👤 WHO WAS INVOLVED: OpenAI, led by CEO Sam Altman, is behind the initiative, with the new role offering a salary of $555,000 plus equity.

📍 WHEN & WHERE: The announcement was made recently on X (formerly Twitter).

💬 KEY QUOTE: “This will be a stressful job,” said Sam Altman, emphasizing the stakes involved in addressing AI risks.

🎯 IMPACT: The role aims to strengthen OpenAI’s safety measures and ensure its AI systems are used responsibly while mitigating potential abuses.

IN FULL

OpenAI announced it is seeking to fill a new position titled “head of preparedness” as part of its efforts to address the risks associated with artificial intelligence (AI), including a possible rogue AI. The role was revealed by OpenAI’s CEO, Sam Altman, who acknowledged the “real challenges” posed by the advanced technologies developed by the organization.

“This will be a stressful job,” Altman stated, underscoring the high stakes and complexity involved in managing the potential dangers of AI systems. He also raised concerns over AI’s impact on mental health and its potential to expose critical vulnerabilities in computer security systems.

In a post on X (formerly Twitter), Altman elaborated on the need for a nuanced understanding of AI capabilities. “We are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world,” he wrote. He further noted that while there is a strong foundation for measuring AI capabilities, much work remains to address the complexities and edge cases.

The new position will expand OpenAI’s existing safety measures, which the company claims include “increasingly complex safeguards.” According to the job listing, the role will focus on scaling safety standards alongside the development of more advanced AI systems. The job comes with a salary of $555,000 and equity in the company.

In May, The National Pulse reported that OpenAI’s former Chief Scientist, Ilya Sutskever, suggested constructing a bunker to prepare for the potential risks associated with artificial general intelligence (AGI), according to details shared by insiders familiar with the 2023 tumult at the top of the AI company. During a summer 2023 meeting, Sutskever reportedly stated, “We’re definitely going to build a bunker before we release AGI.”

Two other people who attended the meeting corroborated the account, with one describing Sutskever’s AGI beliefs as akin to anticipating a “rapture.”

Image by World Economic Forum / Benedikt von Loebell.


OpenAI Scientist Wants to Build Bunker Before Releasing Artificial General Intelligence.

PULSE POINTS:

What Happened: Former OpenAI Chief Scientist Ilya Sutskever reportedly discussed building a bunker in preparation for the release of artificial general intelligence (AGI).

👥 Who’s Involved: Ilya Sutskever, OpenAI leadership, CEO Sam Altman, and researchers within the company.

📍 Where & When: OpenAI, summer 2023, leading up to the November 2023 attempted ouster of Altman.

💬 Key Quote: “We’re definitely going to build a bunker before we release AGI,” Sutskever said during a meeting.

⚠️ Impact: Sutskever’s fixation on AGI and related concerns contributed to internal strife at OpenAI, culminating in his role in an unsuccessful coup against Altman, dubbed “The Blip.”

IN FULL:

OpenAI’s former Chief Scientist, Ilya Sutskever, reportedly suggested constructing a bunker to prepare for the potential risks associated with artificial general intelligence (AGI), according to new details shared by insiders familiar with the 2023 tumult at the top of the artificial intelligence (AI) company. The revelations, which emerged in interviews conducted by journalist Karen Hao, highlight Sutskever’s intense preoccupation with AGI and its implications.

During a summer 2023 meeting, Sutskever reportedly stated, “We’re definitely going to build a bunker before we release AGI.” Two other individuals who attended the meeting corroborated the account, with one describing Sutskever’s AGI beliefs as akin to anticipating a “rapture.”

AGI refers to a form of AI capable of understanding any intellectual task a human being can and carrying it out, possibly more effectively. Sutskever, who co-founded OpenAI, has long been vocal about his views on AGI, even claiming in 2022 that some AI models might be “slightly conscious.” His concerns about AGI’s development reportedly deepened by mid-2023, alongside growing dissatisfaction with OpenAI’s handling of the technology.

This unease played a role in Sutskever’s decision to join other board members in a failed attempt to oust CEO Sam Altman in November 2023. However, sources indicated that Sutskever’s resolve wavered as OpenAI employees rallied behind Altman. He later retracted his opposition to Altman’s leadership, though the reversal ultimately failed to salvage his position at the company.

The internal turmoil, referred to by OpenAI insiders as “The Blip,” underscores the divisions within the company over its direction and the risks of AGI. Despite Sutskever’s departure, the debate over AGI’s future and its potential consequences continues to loom large over OpenAI and the broader tech industry.

Recently, OpenAI announced it was partnering with a start-up founded by Jony Ive, famous for his work on Apple hardware, especially the design of the iPhone. While neither Ive nor Altman has revealed what sort of hardware product the partnership will produce, speculation has centered on “physical AI embodiments,” essentially moving AI technology into forms beyond conventional computers.


Artificial Superintelligence Expected by 2027.

A leading expert in artificial intelligence (AI) has predicted that human-level or superhuman AI, also known as artificial general intelligence (AGI), could be achieved as early as 2027 rather than the previously predicted timeline of 2029 or 2030.

Ben Goertzel, founder of SingularityNET and known for his work on the humanoid robot Sophia, made the statement at this year’s Beneficial AGI Summit, cautioning that even as AGI approaches, many unknowns remain about the technology’s capabilities and timeline.

Goertzel also shared his view that once human-level AGI is reached, it could develop rapidly into an artificial superintelligence (ASI), an AI system with all the combined knowledge of human civilization. This scenario, often referred to as the ‘singularity,’ was previously considered a distant possibility. However, the recent advances in language model technology by OpenAI suggest that it may be closer than initially thought.

Goertzel acknowledged that his predictions carry significant uncertainties: even powerful AI systems would not have a “human mind” in the conventional sense, and his timeline assumes AI will advance in a linear, predictable manner, an assumption that does not account for the world’s social, ethical, and ecological complexities.

American computer scientist and futurist Raymond Kurzweil recently predicted the ‘singularity’ would not occur until 2045. However, the continued advancement of AI remains a controversial topic. A recent study revealed that large language models (LLMs) tended to escalate conflicts to nuclear war when presented with wargaming scenarios.
