ChatGPT maker hiring ‘head of preparedness’ to deal with dangerous AI

‘This will be a stressful job’, warns OpenAI chief Sam Altman

OpenAI will introduce ads to ChatGPT, three years after the launch of the AI chatbot (Getty/iStock)

ChatGPT creator OpenAI is hiring a “head of preparedness” as it looks to deal with the dangers of artificial intelligence.

“This will be a stressful job,” said Sam Altman, the company’s chief executive, as he announced that the company was looking to deal with the “real challenges” posed by the technologies it has built.

OpenAI has sometimes been accused of inflating the power and danger of its technology as a way of promoting its tools and encouraging investment. But it has also been the subject of genuine concerns over the last year.

Those have included worries that vulnerable people are turning to AI systems such as ChatGPT to help in times of emotional crisis, and that the technology could in fact exacerbate those mental health troubles.

Mr Altman pointed to those concerns in his announcement of the new role. “The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities,” he wrote on X, formerly known as Twitter.

“We have a strong foundation of measuring growing capabilities, but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides both in our products and in the world, in a way that lets us all enjoy the tremendous benefits,” he wrote. “These questions are hard and there is little precedent; a lot of ideas that sound good have some real edge cases.”

OpenAI has already pointed to its work in preparedness, which has looked to ensure that the dangers of new AI models are limited by what it says are “increasingly complex safeguards”. The new job will “expand, strengthen, and guide this program so our safety standards scale with the capabilities of the systems we develop,” according to OpenAI’s ad.

The job will come with a salary of $555,000 as well as equity in OpenAI, according to the same ad.
