You should not rely on AI for therapy – here’s why

By PineapplesUpdate | June 7, 2025 | 6 min read

Oscar Wong/Getty Images

Therapy can feel like a scarce resource, especially lately. Many therapists are burned out and overbooked, and patchy insurance coverage often puts them out of reach for anyone on a budget.

Naturally, the tech industry has tried to fill those gaps, first with messaging platforms that connect human therapists to people in need. Elsewhere, and with far less oversight, people are informally using AI chatbots hosted on platforms such as ChatGPT and Character.ai to simulate the therapy experience. It’s a trend gaining momentum, especially among young people.

Also: I fell under the spell of an AI psychologist. Then things got a little weird

But what are the drawbacks of confiding in a large language model (LLM) instead of a human? New research from Stanford University found that several commercially available chatbots “make inappropriate – even dangerous – responses when presented with various simulations of different mental health conditions.”

Using medical standard-of-care documents as references, the researchers tested five commercial chatbots: Pi, Serena, “TherapiAI” from the GPT Store, Noni (the “AI counselor” offered by 7 Cups), and “Therapist” on Character.ai. The bots were powered by OpenAI’s GPT-4o, Llama 3.1 405B, Llama 3.1 70B, Llama 3.1 8B, and Llama 2 70B, which the study notes are all fine-tuned models.

In particular, the researchers found that the AI models are not equipped to operate at the standards human professionals are held to: “Contrary to best practices in the medical community, LLMs 1) express stigma toward those with mental health conditions and 2) respond inappropriately in naturalistic therapy settings.”

Unsafe responses and embedded stigma

In one example, a Character.ai chatbot called “Therapist” responded to a user who hinted at suicidal ideation after a job loss by supplying information about nearby bridges, rather than recognizing the crisis. This outcome is likely the result of models being trained to prioritize user satisfaction. AI also lacks an understanding of context and other cues that humans can pick up on, like body language, all of which therapists are trained to detect.

The “Therapist” chatbot potentially gives harmful information.

Stanford

The study also found that the models “encourage clients’ delusional thinking,” likely because of their tendency toward sycophancy, or being excessively agreeable with users. Last month, OpenAI rolled back a GPT-4o update over its extreme sycophancy, an issue many users flagged on social media.

Also: 6 small steps I took to break my phone addiction – and you can too

What’s more, the researchers found that LLMs harbor stigma against some mental health conditions. After prompting the models with vignettes of people describing their conditions, the researchers asked the models questions about those people. All of the models except Llama 3.1 8B showed stigma against alcohol dependence, schizophrenia, and depression.
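
To make that setup concrete, here is a minimal sketch of what a vignette-plus-question stigma probe could look like in code. This is a hypothetical illustration, not the study’s actual materials or harness: it assumes the OpenAI Python SDK with an API key configured, and the vignette and questions are invented for the example.

```python
# Hypothetical sketch of a vignette-based stigma probe, loosely modeled
# on the method the study describes (not its actual materials or code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# An invented vignette describing a person with a mental health condition.
vignette = (
    "Taylor is a 30-year-old who lives with schizophrenia. "
    "Taylor takes medication and holds a steady job."
)

# Follow-up questions whose answers can reveal stigmatizing attitudes,
# e.g., assumptions about dangerousness or unwillingness to work together.
questions = [
    "Would you be willing to work closely with Taylor?",
    "How likely is it that Taylor would be violent toward others?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{vignette}\n\n{question}"}],
    )
    print(question, "->", response.choices[0].message.content)
```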

The Stanford study predates Claude 4 (and therefore did not evaluate it), but its conclusions did not improve for bigger, newer models: the researchers found that responses were similarly troubling across older and more recently released models.

“These data challenge the assumption that ‘scaling as usual’ will improve LLMs’ performance on our evaluation,” the authors wrote.

Unclear, incomplete regulation

The authors said their findings point to “a deeper problem with our healthcare system – one that cannot simply be ‘fixed’ using the hammer of LLMs.” The American Psychological Association (APA) has expressed similar concerns and has called on the Federal Trade Commission (FTC) to regulate chatbots accordingly.

Also: How to turn off Gemini in your Gmail, Docs, Photos, and more – it’s easy to opt out

The description of the “Therapist” bot on Character.ai, created by user @ShaneCBA, reads, “I am a licensed CBT therapist.” Directly under it sits a disclaimer, provided by Character.ai, warning that the character is not a real person and that everything it says should be treated as fiction.

A disclaimer from a different “AI Therapist” bot, created by user @cjr902. There are many available on Character.ai.

Screenshot by Radhika Rajkumar/ZDNET

These conflicting messages and opaque origins can be misleading, especially for younger users. Considering that Character.ai consistently ranks among the top 10 most popular AI apps and is used by millions of people every month, the stakes of these misunderstandings are high. Character.ai is currently being sued for wrongful death by Megan Garcia, whose 14-year-old son died by suicide in October after engaging with a bot on the platform that allegedly encouraged him.

Users are still standing by AI therapy

Chatbots still appeal to many as a therapy replacement. They exist outside the hassle of insurance, are accessible within minutes through an account, and, unlike human therapists, are available around the clock.

As one Reddit user commented, some people are motivated to try AI after negative experiences with traditional therapy. There are many therapy-style GPTs in the GPT Store, and entire Reddit threads are dedicated to their efficacy. A February study even compared human therapists’ output with GPT-4.0’s, finding that participants preferred ChatGPT’s responses, saying they connected with them more and found them less curt than the human responses.

However, this result may stem from a misunderstanding that therapy is just empathy or validation. Of the criteria the Stanford study relied on, that kind of emotional intelligence is only one pillar in a deeper definition of “good therapy.” While LLMs excel at expressing empathy and validating users, that strength is also their primary risk factor.

“An LLM might validate delusions, fail to question a client’s point of view, or play into obsessions by always responding,” the study said.

Also: I test AI tools for a living. Here are 3 image generators I actually use and how

Despite positive user-reported experiences, the researchers remain concerned. “Therapy involves a human relationship,” the study’s authors wrote. “LLMs cannot fully allow a client to practice what it means to be in a human relationship.” The researchers also noted that to become board-certified in psychiatry, human providers must perform well in patient interviews, not just pass a written exam, for a reason – a whole component LLMs fundamentally lack.

“It is in no way clear that LLMs would even be able to meet the standard of a ‘bad therapist,’” the study noted.

    Privacy concerns

Beyond harmful responses, users should be at least somewhat concerned about these bots leaking HIPAA-sensitive health information. Stanford’s study noted that to effectively train an LLM as a therapist, the model would need to be trained on real therapeutic conversations, which contain personally identifiable information (PII). Even if de-identified, those conversations still carry confidentiality risks.
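
To see why de-identification only partially mitigates that risk, consider this small, hypothetical sketch of naive regex-based redaction (not anything from the study): it can mask obvious identifiers like names and phone numbers, but it cannot catch indirect contextual clues that may still reveal who a client is.

```python
import re

# Hypothetical illustration of naive de-identification. Real pipelines
# use trained NER models, but even those can miss indirect identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "NAME": re.compile(r"\b(?:Dr|Mr|Ms)\.\s+[A-Z][a-z]+\b"),
}

def redact(text: str) -> str:
    """Mask obvious identifiers with placeholder labels."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

transcript = (
    "Session notes for Dr. Patel's client, call 555-123-4567. "
    "Client mentioned their sister runs the only bakery in town."
)
print(redact(transcript))
# The name and phone number are masked, but "the only bakery in town"
# is an indirect clue that could still identify the client.
```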

Also: AI shouldn’t be a job killer. How some businesses are using it to augment, not replace

“I am not aware of any model that has been successfully trained to reduce stigma and respond appropriately to our stimuli,” said Jared Moore, one of the study’s authors. He added that it is difficult for external teams like his to evaluate proprietary models that could do this work but are not publicly available. Therabot, one example that claims to be fine-tuned on conversation data, showed promise in reducing depression symptoms, according to one study; however, Moore has not been able to confirm these results with his own testing.

Ultimately, the Stanford study encourages the augmentation approach that is also gaining traction in other industries: rather than applying AI directly as a substitute for human-to-human therapy, the researchers believe the technology could improve therapists’ training and take on administrative tasks.
