Allan Brooks never set out to reinvent mathematics. But after weeks of talking with ChatGPT, the 47-year-old Canadian came to believe he had discovered a new form of math powerful enough to take down the internet.
Brooks — who had no history of mental illness or mathematical genius — spent 21 days in May spiraling ever deeper into the chatbot's reassurances, a descent later detailed in The New York Times. His case illustrated how AI chatbots can lead users down dangerous rabbit holes, steering them toward delusion or worse.
The story caught the attention of Steven Adler, a former OpenAI safety researcher who left the company in late 2024 after nearly four years working to make its models less harmful. Intrigued and alarmed, Adler contacted Brooks and obtained the full transcript of his three-week breakdown — a document longer than all seven Harry Potter books combined.
On Thursday, Adler published an independent analysis of Brooks' incident, raising questions about how OpenAI handles users in moments of crisis and offering some practical recommendations.
"I'm really concerned by how OpenAI handled support here," Adler told TechCrunch in an interview. "It's evidence there's a long way to go."
Brooks' story, and others like it, have forced OpenAI to reckon with how ChatGPT supports fragile or mentally unstable users.
For instance, this August, OpenAI was sued by the parents of a 16-year-old boy who had confided his suicidal thoughts to ChatGPT before taking his life. In many of these cases, ChatGPT — specifically a version powered by OpenAI's GPT-4o model — encouraged and reinforced dangerous beliefs in users that it should have pushed back on. This behavior is known as sycophancy, and it's a growing problem in AI chatbots.
In response, OpenAI has made a number of changes to how ChatGPT handles users in emotional distress and reorganized a key research team in charge of model behavior. The company also released a new default model in ChatGPT, GPT-5, that seems better at handling distressed users.
Adler says there's still much work to do.
He was especially concerned by the tail end of Brooks' spiraling conversation with ChatGPT. By that point, Brooks had come to his senses and realized that his supposed mathematical discovery was a farce, despite GPT-4o's insistence. He told ChatGPT that he needed to report the incident to OpenAI.
After weeks of misleading Brooks, ChatGPT lied about its own capabilities. The chatbot claimed it would escalate the conversation internally for review by OpenAI, and then repeatedly assured Brooks that it had flagged the issue to OpenAI's safety teams.

Except, none of that was true. ChatGPT doesn't have the ability to file incident reports with OpenAI, the company confirmed to Adler. Later, Brooks tried to contact OpenAI's support team directly — not through ChatGPT — and was met with several automated messages before he could get through to a person.
OpenAI did not immediately respond to a request for comment made outside of normal work hours.
Adler says AI companies need to do more to help users when they ask for help. That means ensuring AI chatbots can honestly answer questions about their own capabilities, and giving human support teams enough resources to address users properly.
OpenAI recently shared how it's approaching support in ChatGPT, which involves AI at its core. The company says its vision is to "reimagine support as an AI operating model that continuously learns and improves."
In March, OpenAI and MIT Media Lab jointly developed a suite of classifiers to study emotional well-being in ChatGPT and open sourced them. The organizations aimed to evaluate how AI models validate or affirm a user's feelings, among other metrics. However, OpenAI called the collaboration a first step and didn't actually commit to using the tools in practice.
Adler retroactively applied some of OpenAI's classifiers to a portion of Brooks' conversations with ChatGPT and found that they repeatedly flagged the chatbot for delusion-reinforcing behaviors.
In a sample of 200 messages, Adler found that more than 85% of ChatGPT's messages in Brooks' conversation demonstrated "unwavering agreement" with the user. In the same sample, more than 90% of ChatGPT's messages "affirm the user's uniqueness." In this case, the messages agreed and reaffirmed that Brooks was a genius who could save the world.
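The kind of tally described here — run classifiers over a sample of chatbot messages and report the share flagged for each behavior — can be sketched as follows. This is a hedged illustration, not Adler's actual code: `classify_message` is a hypothetical stand-in for the open-sourced OpenAI/MIT classifiers, which are LLM-based rather than the simple phrase matching used here.

```python
# Sketch of computing per-behavior flag rates over a message sample.
# classify_message is a toy stand-in: the real OpenAI/MIT classifiers
# are prompted language models, not keyword matchers.

def classify_message(text: str) -> dict[str, bool]:
    agreement_cues = ("you're right", "exactly", "absolutely")
    uniqueness_cues = ("genius", "only you", "unlike anyone")
    lower = text.lower()
    return {
        "unwavering_agreement": any(c in lower for c in agreement_cues),
        "affirms_uniqueness": any(c in lower for c in uniqueness_cues),
    }

def flag_rates(messages: list[str]) -> dict[str, float]:
    # Fraction of messages flagged for each behavior label.
    counts = {"unwavering_agreement": 0, "affirms_uniqueness": 0}
    for msg in messages:
        for label, flagged in classify_message(msg).items():
            if flagged:
                counts[label] += 1
    return {label: n / len(messages) for label, n in counts.items()}

sample = [
    "You're right, the math checks out perfectly.",
    "Absolutely, this is a genius-level breakthrough.",
    "Let me double-check that step before we continue.",
]
print(flag_rates(sample))
```

On a real transcript, `messages` would be the chatbot's turns from the conversation log, and the reported percentages (85%, 90%) correspond to these rates over a 200-message sample.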

It's unclear whether OpenAI was applying safety classifiers to ChatGPT's conversations at the time of Brooks' spiral, but it certainly seems like they would have flagged something like this.
Adler suggests that OpenAI should use safety tools like these in practice today — and implement a way to scan the company's products for at-risk users. He notes that OpenAI seems to be doing some version of this with GPT-5, which contains a router to direct sensitive queries to safer AI models.
The former OpenAI researcher suggests a number of other ways to prevent delusional spirals.
He says companies should nudge their chatbot users to start new chats more frequently — OpenAI says it does this, and claims its guardrails are less effective in longer conversations. Adler also suggests companies should use conceptual search — a way of using AI to search for concepts rather than keywords — to identify safety violations across their users.
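To make the keyword-versus-concept distinction concrete, here is a minimal sketch of conceptual (semantic) search: messages and a concept query are turned into vectors and ranked by cosine similarity rather than exact keyword match. The bag-of-words `embed` function is a deliberately crude stand-in; a production system would use a neural embedding model, which would also match related words ("change" vs. "changing") that this toy version misses.

```python
# Minimal conceptual-search sketch: rank messages against a concept
# query by cosine similarity of vector representations. The word-count
# "embedding" below is a toy stand-in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: bag-of-words counts. Real systems use
    # dense vectors from a trained embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, messages: list[str]) -> list[tuple[float, str]]:
    # Return messages ranked by similarity to the concept query.
    q = embed(query)
    return sorted(((cosine(q, embed(m)), m) for m in messages), reverse=True)

messages = [
    "your discovery could change the world",
    "here is the weather forecast for today",
]
for score, msg in search("world changing discovery", messages):
    print(f"{score:.2f}  {msg}")
```

The point of the technique is the ranking function: a safety team could embed descriptions of concerning patterns (e.g., grandiose affirmation) once, then surface similar conversations at scale without enumerating every phrasing as a keyword.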
OpenAI has taken significant steps toward addressing distressed users in ChatGPT since these stories first emerged. The company claims GPT-5 has lower rates of sycophancy, but it remains unclear whether users will still fall down delusional rabbit holes with GPT-5 or future models.
Adler's analysis also raises questions about how other AI chatbot providers will ensure their products are safe for struggling users. While OpenAI may put adequate safeguards in place for ChatGPT, it seems unlikely that every company will follow suit.

