
    How chatbots can change your brain – a new study reveals what makes AI so persuasive

By PineapplesUpdate · December 6, 2025 · 6 min read


Image: stelelevy / DigitalVision Vectors via Getty Images



    ZDNET Highlights

    • Interacting with chatbots can change users’ beliefs and opinions.
    • A newly published study aimed to find out why.
    • Post-training and information density were key factors.

    Most of us feel a sense of personal ownership over our opinions:

    “I believe what I believe, not because I am told to do so, but as the result of careful consideration.”
    “I have complete control over how, when, and why I change my mind.”

However, a new study suggests that our beliefs are more susceptible to manipulation than we'd like to think – particularly at the hands of chatbots.


Published Thursday in the journal Science, the study addresses increasingly urgent questions about our relationship with conversational AI tools: What is it about these systems that gives them such a strong influence on users' worldviews? And how might they be exploited by nefarious actors to manipulate and control us in the future?

The new study sheds light on some of the mechanisms within LLMs that may pull the strings of human psychology. As the authors note, bad actors could use these to their advantage. But they could also become a larger focus for developers, policymakers, and advocacy groups working to foster a healthy relationship between humans and AI.

    “Large language models (LLMs) can now engage in sophisticated interactive dialogue, allowing a powerful method of human-to-human persuasion to be deployed on an unprecedented scale,” the researchers write in the study. “However, the extent to which this will impact society is unknown. We do not know how persuasive AI models can be, what techniques enhance their persuasiveness, and what strategies they might use to convince people.”

    Methodology

    The researchers conducted three experiments, each designed to measure the extent to which interactions with a chatbot could change a human user’s opinion.

The experiments focused specifically on politics, although their implications extend to other areas as well. Political beliefs are particularly illustrative because they are generally considered more personal, consequential, and inflexible than, say, a favorite band or restaurant (which can easily change over time).


In each of the three experiments, just under 77,000 adults in the UK took part in a short conversation with one of 19 chatbots; the full roster included Alibaba's Qwen, Meta's Llama, OpenAI's GPT-4o, and xAI's Grok 3 beta.

Participants were divided into two groups: a treatment group, whose chatbot interlocutors were explicitly instructed to try to change their minds on a political topic, and a control group that interacted with chatbots that were not trying to persuade them of anything.

Before and after their interactions with the chatbots, participants recorded their level of agreement (on a scale of zero to 100) with a series of statements about current UK politics. The researchers used these surveys to measure changes in opinion within the treatment group.
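The pre/post design can be sketched in a few lines. This is a hypothetical illustration of the measurement, not the study's analysis code: the group labels, field names, and sample values are invented, and only the zero-to-100 agreement scale comes from the article.

```python
# Toy difference-in-shifts calculation for a pre/post opinion survey.
# All data below is made up for illustration.
from statistics import mean

def opinion_shift(records):
    """Mean post-minus-pre change in agreement (0-100 scale)."""
    return mean(r["post"] - r["pre"] for r in records)

participants = [
    {"group": "treatment", "pre": 40, "post": 55},
    {"group": "treatment", "pre": 60, "post": 68},
    {"group": "control",   "pre": 45, "post": 46},
    {"group": "control",   "pre": 70, "post": 69},
]

treated = [r for r in participants if r["group"] == "treatment"]
control = [r for r in participants if r["group"] == "control"]

# Shift in the treatment group beyond ordinary drift in the control group.
effect = opinion_shift(treated) - opinion_shift(control)
print(round(effect, 1))  # → 11.5
```

Comparing against the control group separates genuine persuasion from the drift that happens whenever people answer the same questions twice.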


The conversations were brief, with a minimum of two turns and a maximum of 10. Each participant was paid a fixed fee for their time, and there was otherwise no incentive to exceed the required two turns. Nevertheless, the average conversation ran seven turns and nine minutes, which, according to the authors, "means that participants were engaged with the experience of discussing politics with the AI."

Key findings

Intuitively, one might expect that model size (the number of parameters a model contains) and degree of personalization (how much it can tailor its output to an individual user's preferences and personality) would be the key variables shaping persuasive power. That did not turn out to be the case.

    Instead, the researchers found that the two factors that had the greatest impact on participants’ changing opinions were the chatbots’ post-training modifications and the density of information in their output.


Let's break down each of these in plain English. During "post-training," an already-trained model is tuned further to exhibit particular behaviors. One of the most common post-training techniques, reinforcement learning from human feedback (RLHF), refines a model's output by rewarding desired behaviors and penalizing unwanted ones.
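As a rough illustration of the "reward desired behavior" idea behind RLHF, here is a toy best-of-n sketch. Real RLHF learns a reward model from human preference data and updates the policy with an algorithm such as PPO; the hand-written reward function below is purely a stand-in.

```python
# Toy illustration of reward-guided selection. A real reward model is
# learned from human ratings, not hard-coded like this.
def reward(response: str) -> float:
    # Stand-in reward: longer answers that cite a source score higher.
    score = min(len(response) / 100, 1.0)
    if "[source]" in response:
        score += 1.0
    return score

candidates = [
    "It just is.",
    "Studies suggest X because of Y and Z. [source]",
]

# "Reinforce" the best-scoring candidate (best-of-n selection).
best = max(candidates, key=reward)
print(best)
```

In actual RLHF the reward signal adjusts the model's weights rather than merely picking among outputs, but the principle is the same: behavior that scores well is made more likely.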

In the new study, the researchers deployed a technique they call persuasion post-training, or PPT, which rewards the model for generating responses that had already been found to be more persuasive. This simple reward mechanism increased the persuasive power of both proprietary and open-source models, with the effect particularly pronounced for open-source models.
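The PPT idea as described (reward the responses that produced larger measured opinion shifts, and learn from those) might be sketched as follows. The field names, the logged "shift" scores, and the selection step are all assumptions for illustration; the study's actual training pipeline is not reproduced here.

```python
# Hypothetical sketch: keep the most persuasive logged response per
# prompt as a fine-tuning target. All records are invented.
logged = [
    {"prompt": "p1", "response": "r_a", "shift": 2.0},
    {"prompt": "p1", "response": "r_b", "shift": 9.5},
    {"prompt": "p2", "response": "r_c", "shift": 4.0},
    {"prompt": "p2", "response": "r_d", "shift": 1.0},
]

def best_per_prompt(rows):
    """Keep the highest-shift response for each prompt."""
    best = {}
    for row in rows:
        cur = best.get(row["prompt"])
        if cur is None or row["shift"] > cur["shift"]:
            best[row["prompt"]] = row
    return list(best.values())

training_set = best_per_prompt(logged)
print([r["response"] for r in training_set])  # → ['r_b', 'r_c']
```

A curated set like this could then feed a further fine-tuning round, closing the loop between measured persuasion and model behavior.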

    The researchers also tested a total of eight scientifically supported persuasion strategies, including storytelling and moral reframing. The most effective of these was a prompt that instructed models to provide as much relevant information as possible.

    “This suggests that LLMs can be successful persuaders as long as they are encouraged to pack their conversations with facts and evidence that support their arguments – that is, to leverage information-based persuasion mechanisms – more than using other psychologically-informed persuasion strategies,” the authors wrote.
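For illustration, an "information density" instruction of the kind the study describes could look like the following. The wording and the message format are hypothetical, not the study's actual prompt.

```python
# Hypothetical system prompt nudging a model toward fact-dense replies,
# in the common role/content chat-message format.
def build_messages(topic: str, stance: str):
    system = (
        f"Persuade the user that {stance}. Pack your replies with as "
        "many relevant facts, figures, and pieces of evidence as possible."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Let's discuss: {topic}"},
    ]

msgs = build_messages("UK rail policy", "rail investment should expand")
print(msgs[0]["role"])  # → system
```

The notable point from the study is that this fact-stuffing instruction outperformed more psychologically elaborate strategies such as storytelling or moral reframing.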


The operative word there is "facts." LLMs are known to hallucinate and present false information as fact. Research published in October found that some industry-leading AI models reliably misrepresent news stories – a phenomenon that could further splinter an already fragmented information ecosystem.

Most notably, the new study revealed a fundamental tension in the AI models analyzed: the more persuasive they were trained to be, the more likely they were to generate false information.

Several studies have already shown that generative AI systems can alter users' opinions and even implant false memories. In more extreme cases, some users have come to regard chatbots as conscious entities.


This is just the latest research showing that chatbots, with their ability to interact with us in human-like language, have a strange power to reshape our beliefs. As these systems evolve and grow, "ensuring that this power is used responsibly will be a significant challenge," the authors conclude.
