
ZDNET's key takeaways
- OpenAI is adding new safety measures to ChatGPT.
- A teenager recently used the chatbot to learn how to take his own life.
- OpenAI may also add parental controls for young users.
ChatGPT does not have a good track record of intervening when a user is in an emotional crisis, but several updates from OpenAI aim to change that.
The company is strengthening safeguards around how its chatbot responds to distressed users, it announced this week: updating how and what material gets blocked, expanding interventions, localizing emergency resources, and bringing a parent into the conversation when needed. In the future, parents will also be able to see how their child is using the chatbot.
Also: Patients trust AI's medical advice over doctors – even when it's wrong, study finds
People turn to ChatGPT for everything, including advice, but the chatbot may not be equipped to handle the more sensitive questions some users are asking. OpenAI CEO Sam Altman himself has said he wouldn't trust AI for therapy, citing privacy concerns; a recent Stanford study detailed how chatbots lack the critical training human therapists have to identify when a person is a threat to themselves or others, for example.
Teen suicide linked to chatbot
Those shortcomings can have heart-wrenching results. In April, a teenage boy who spent hours discussing suicide methods with ChatGPT ultimately took his own life. His parents have filed a lawsuit against OpenAI, alleging that ChatGPT "neither terminated the session nor initiated any emergency protocol" despite showing awareness of the teen's suicidal state. In a similar case, the AI chatbot platform Character.ai is also being sued by a mother whose teenage son died by suicide after becoming entangled with a bot that allegedly encouraged him.
ChatGPT has safeguards, but they work best in short exchanges. "As the back-and-forth grows, parts of the model's safety training may degrade," OpenAI wrote in the announcement. Initially, the chatbot might direct a user to a suicide hotline, but over time, as the conversation wanders, the bot could offer an answer that sidesteps those safeguards.
Also: Anthropic agrees to settle copyright infringement class action suit – what it means
"This is exactly the kind of breakdown we are working to prevent," OpenAI writes, adding that its "top priority is making sure ChatGPT doesn't make a hard moment worse."
Increased safety measures for users
One way to do that is strengthening safeguards across the board so they hold up as a conversation continues, preventing the chatbot from enabling or encouraging self-harm. Another is ensuring inappropriate material is fully blocked – an issue the company has run into with its chatbot in the past.
"We are tuning those [blocking] thresholds so protections trigger when they should," the company writes. OpenAI is also working on an update that helps de-escalate crises by grounding users in reality, and it is broadening interventions beyond self-harm to other forms of mental distress and other types of crises.
Also: You should use Gemini's new 'secret' chat mode – here's why and what it does
The company is making it easier for the chatbot to reach emergency services or expert help when users express an intent to harm themselves. It has implemented one-click access to emergency services and is connecting users with certified therapists. OpenAI said it is "exploring ways to make it easier for people to reach those closest to them," which could include letting users designate emergency contacts and making it easier to start conversations with loved ones.
"We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT," OpenAI said.
OpenAI's recently released GPT-5 model improves on several benchmarks, such as avoiding unhealthy emotional reliance, reducing sycophancy, and cutting poor model responses to mental health emergencies by more than 25%, the company said.
"GPT-5 also builds on a new safety training method called safe completions, which teaches the model to be as helpful as possible while staying within safety limits. This may mean giving partial or high-level answers instead of details that could be unsafe," the company said.