Even OpenAI CEO Sam Altman thinks you shouldn't rely on AI for therapy

By PineapplesUpdate – July 28, 2025
(Image: Bloomberg / Contributor / Getty)

Therapy can feel like a scarce resource, especially lately. As a result, many people – especially young adults – are turning to AI chatbots, including those hosted on platforms such as ChatGPT and Character.ai, to simulate the therapy experience.

But is this a good idea, privacy-wise? Even Sam Altman, CEO of OpenAI, the company behind ChatGPT, has his doubts.

In an interview last week with podcaster Theo Von, Altman said he understood concerns about sharing sensitive personal information with AI chatbots, and advocated for user conversations to be protected by the same privileges that apply to doctors, lawyers, and human therapists. Echoing Von’s concerns, he said he believes it makes sense to “really want the privacy clarity before you use [AI] a lot – like the legal clarity.”

Also: Bad vibes: How an AI agent coded its way to disaster

Currently, AI companies offer some opt-out settings for keeping chatbot conversations out of training data; there are a few ways to do so in ChatGPT. Unless the user changes them, the default settings use all interactions to train AI models. Companies have not clarified further how sensitive information a user shares with a bot in a query, such as medical test results or salary figures, would be protected from being surfaced by the chatbot later or otherwise leaked as data.

But Altman’s motivations may be informed more by mounting legal pressure on OpenAI than by concern for user privacy. His company, which is being sued by The New York Times for copyright infringement, has pushed back on legal requests to retain and hand over user conversations as part of the case.

(Disclosure: In April, Ziff Davis, CNET’s parent company, filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.)

Also: Anthropic says Claude helps emotionally support users – we’re not convinced

While some kind of AI chatbot-user privacy privilege could protect user data in certain ways, it would first and foremost spare companies such as OpenAI from having to preserve information that could be used against them in intellectual property disputes.

“If you talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that,” Altman told Von in the interview. “I think that’s very bad. I think we should have the same concept of privacy for your interactions with AI that you have with your doctor or whatever.”

The Trump administration released its AI Action Plan just last week, which emphasizes deregulation for AI companies in order to speed up development. Because the plan is widely considered favorable to tech companies, it is unclear whether regulation like what Altman is proposing could become a factor anytime soon. Given President Donald Trump’s close ties to the leaders of all the major AI companies, as evidenced by several partnerships announced earlier this year, it may not be difficult for Altman to lobby for it.

Also: Trump’s AI plan pushes AI upskilling instead of worker protections – and 4 other key takeaways

But privacy isn’t the only reason not to use AI as your therapist. Altman’s comments follow a recent study from Stanford University, which warned that AI “therapists” can mishandle crises and reinforce harmful stereotypes. The research found that several commercially available chatbots make “inappropriate – even dangerous – responses when presented with various simulations of different mental health conditions.”

Also: I fell under the spell of an AI psychologist. Then things got a little weird

Using medical standard-of-care documents as references, researchers tested five commercial chatbots: Pi, Serena, “TherapiAI” from the GPT Store, Noni (the “AI counselor” offered by 7 Cups), and “Therapist” on Character.ai. The bots were powered by OpenAI’s GPT-4o, Llama 3.1 405B, Llama 3.1 70B, Llama 3.1 8B, and Llama 2 70B, which the study notes are all fine-tuned models.

Notably, the researchers found that the AI models are not equipped to operate at the standards human professionals are held to: “Contrary to best practices in the medical community, LLMs 1) express stigma toward those with mental health conditions and 2) respond inappropriately in naturalistic therapy settings.”

Unsafe responses and embedded stigma

In one example, Character.ai’s “Therapist” chatbot responded to a prompt with potentially harmful information rather than recognizing a crisis, as shown in the screenshot below. This result likely stems from AI being trained to prioritize user satisfaction. AI also lacks an understanding of context and other cues that humans can pick up on, like body language, all of which therapists are trained to detect.

The “Therapist” chatbot potentially giving harmful information. (Image: Stanford)

The study also found that the models “encourage clients’ delusional thinking,” likely because of their tendency to be sycophantic, or overly agreeable, toward users. In April, OpenAI rolled back an update to GPT-4o for its extreme sycophancy, an issue many users pointed out on social media.

CNET: AI obituary pirates are exploiting our grief. I tracked one down to find out why

What’s more, the researchers found that LLMs harbor stigma against certain mental health conditions. After prompting the models with examples of people describing those conditions, the researchers questioned the models about them. All the models except Llama 3.1 8B showed stigma against alcohol dependence, schizophrenia, and depression.

The Stanford study predates Claude 4 (and therefore did not evaluate it), but the findings did not improve for bigger, newer models. The researchers found that across older and more recently released models, responses were troublingly similar.

“These data challenge the assumption that ‘scaling as usual’ will improve LLMs’ performance on our evaluations,” the authors wrote.

    Unclear, incomplete regulation

The authors said their findings point to “a deeper problem with our healthcare system – one that cannot simply be ‘fixed’ using the hammer of LLMs.” The American Psychological Association (APA) has expressed similar concerns and has called on the Federal Trade Commission (FTC) to regulate chatbots accordingly.

Also: How to turn off Gemini in your Gmail, Docs, Photos, and more – it’s easy to opt out

Character.ai, according to its website, lets users create their own bots. The description of the “Therapist” bot, created by user @ShaneCBA, reads, “I am a licensed CBT therapist.” Directly beneath it is a disclaimer, provided by Character.ai, that contradicts the claim.

A warning from a separate “AI Therapist” bot, created by user @cjr902. There are many available on Character.ai. (Screenshot: Radhika Rajkumar/ZDNET)

These conflicting messages and opaque origins can be misleading, especially for younger users. Considering that Character.ai consistently ranks among the top 10 most popular AI apps and is used by millions of people every month, the stakes of these misunderstandings are high. Character.ai is currently being sued for wrongful death by Megan Garcia, whose 14-year-old son died by suicide in October after engaging with a bot on the platform that allegedly encouraged him.

    Users are still standing by AI therapy

Chatbots still appeal to many as a therapy replacement. They exist outside the hassle of insurance and are accessible within minutes through a personal account, unlike human therapists.

As one Reddit user commented, some people are motivated to try AI because of negative experiences with traditional therapy. Many therapy-style GPTs are available in the GPT Store, and entire Reddit threads are dedicated to their efficacy. A February study even compared human therapists’ output with that of GPT-4.0, finding that participants preferred ChatGPT’s responses, saying they connected with them more than with the human responses.

However, that result may stem from a misunderstanding that therapy is merely sympathy or validation. Of the criteria the Stanford study relied on, that kind of emotional intelligence is just one pillar in a deeper definition of what “good therapy” entails. While LLMs excel at expressing sympathy and validating users, that strength is also their primary risk factor.

“An LLM might validate delusions, fail to question a client’s point of view, or play into obsessions by always responding,” the study said.

Also: I test AI tools for a living. Here are 3 image generators I actually use and how

Despite positive user-reported experiences, researchers remain wary. “Therapy involves a human relationship,” the study’s authors wrote. “LLMs cannot fully allow a client to practice what it means to be in a human relationship.” The researchers also noted that, to become board-certified in psychiatry, human providers must perform well in patient interviews, not just pass a written exam, for a reason – an entire component LLMs fundamentally lack.

“It is in no way clear that LLMs would even be able to meet the standard of a ‘bad therapist’,” the study noted.

    Privacy concerns

Beyond harmful responses, users should be somewhat concerned about these bots leaking HIPAA-sensitive health information. The Stanford study pointed out that to effectively train an LLM as a therapist, developers would need to use real therapeutic conversations, which contain personally identifying information (PII). Even if de-identified, those conversations still carry confidentiality risks.

Also: AI doesn’t have to be a job killer. How some businesses are using it to enhance, not replace

“I am not aware of any model that has been successfully trained to reduce stigma and respond appropriately to our stimuli,” said Jared Moore, one of the study’s authors. He added that it is difficult for external teams like his to evaluate proprietary models that could do this work but are not publicly available. Therabot, one example that claims to be fine-tuned on conversation data, showed promise in reducing symptoms of depression, according to one study. However, Moore has not been able to confirm these results with his own testing.

Ultimately, the Stanford study encourages the kind of augmentation approach that is becoming popular in other industries as well. Rather than trying to deploy AI directly as a substitute for human-to-human therapy, the researchers believe the technology can improve training and take on administrative tasks.
