
ZDNET's key takeaways
- OpenAI is adding break reminders to ChatGPT.
- ChatGPT will get better tools for mental health support.
- The company is working with experts, including physicians and researchers.
As OpenAI prepares for one of the biggest ChatGPT launches of the year, the company is also taking steps to make the chatbot safer and more reliable with its latest updates.
Also: Can Apple create an AI search engine to rival Gemini and ChatGPT? Here's how it could succeed
On Monday, OpenAI published a blog post detailing how the chatbot is being updated to be more useful, to better support you when you need it, to remind you when you've been using it too much, and to provide better responses:
We build ChatGPT to help you thrive in the ways you choose – not to hold your attention, but to help you use it well. We're improving support for hard moments, rolling out break reminders, and developing better life advice, all guided by expert input.
– OpenAI (@OpenAI) August 4, 2025
Take a break
If you have ever tinkered with ChatGPT, you are familiar with the feeling of getting lost in conversation. Its responses are so engaging and conversational that it is easy to keep the volley going back and forth. This is especially true for fun tasks, such as creating an image and then modifying it to generate different renderings that meet your exact requirements.
To encourage a healthier balance and give you more control over your time, ChatGPT will now gently remind you during long sessions to take a break, as seen in the image above. OpenAI said it will continue to tune the notifications so they feel helpful and natural.
Mental health support
People are turning to the chatbot for advice and support for many reasons, including its conversational abilities, its on-demand availability, and the comfort of getting advice from an entity that doesn't know or judge you. OpenAI is aware of this use case. The company has added guardrails to deal with hallucinations and to help curb sycophancy and a lack of emotional awareness.
For example, OpenAI acknowledges that the GPT-4o model fell short in recognizing signs of delusion or emotional dependency. In response, the company continues to develop tools to detect signs of mental or emotional distress, allowing ChatGPT to respond appropriately and point the user to the best resources.
Also: OpenAI's most capable models hallucinate more than previous ones
ChatGPT will also soon get new behavior for high-stakes personal decisions. When faced with a big personal question, such as "Should I break up with my boyfriend?", the chatbot will help the user think through their options instead of providing a quick answer. This approach is similar to ChatGPT's study mode, which, as I recently explained, guides users to an answer through a series of questions.
OpenAI is working closely with experts, including more than 90 physicians and psychiatrists in more than 30 countries, as well as human-computer interaction (HCI) researchers, to improve how the chatbot interacts with users in moments of mental or emotional distress. The company is also convening an advisory group of experts in mental health, youth development, and HCI.
Even with these updates, it is important to remember that AI is prone to hallucinations, and that entering sensitive data carries privacy and security implications. OpenAI CEO Sam Altman himself recently raised privacy concerns about sharing sensitive information with the chatbot in a podcast interview with Theo Von.
Also: Anthropic wants to stop AI models from turning evil - here's how
Therefore, a healthcare provider is still the best option for your mental health needs.

