
ZDNET Highlights
- Interacting with chatbots can change users’ beliefs and opinions.
- A newly published study aimed to find out why.
- Post-training and information density were key factors.
Most of us feel a sense of personal ownership over our opinions:
“I believe what I believe, not because I am told to do so, but as the result of careful consideration.”
“I have complete control over how, when, and why I change my mind.”
However, a new study suggests that our beliefs are more susceptible to manipulation than we’d like to think – not least at the hands of chatbots.
Also: Get your news from AI? Be careful – it’s wrong about half the time
Published Thursday in the journal Science, the study addressed increasingly urgent questions about our relationship with conversational AI tools: What is it about these systems that gives them such a strong impact on users’ worldviews? And how might they be used by nefarious actors to manipulate and control us in the future?
The new study sheds light on some of the mechanisms within LLMs that can pull the strings of human psychology. As the authors note, bad actors could use these to their advantage. But they may also become a larger focus for developers, policymakers, and advocacy groups in their efforts to foster a healthy relationship between humans and AI.
“Large language models (LLMs) can now engage in sophisticated interactive dialogue, allowing a powerful method of human-to-human persuasion to be deployed on an unprecedented scale,” the researchers write in the study. “However, the extent to which this will impact society is unknown. We do not know how persuasive AI models can be, what techniques enhance their persuasiveness, and what strategies they might use to convince people.”
Methodology
The researchers conducted three experiments, each designed to measure the extent to which interactions with a chatbot could change a human user’s opinion.
The experiments focused specifically on politics, although their implications extend to other areas as well. Political beliefs are particularly illustrative, though, because they are generally considered more personal, consequential, and inflexible than, say, your favorite band or restaurant (which can easily change over time).
Also: Using AI for medicine? Don’t do it — it’s bad for your mental health, APA warns
Across the three experiments, just under 77,000 adults in the UK took part in a short conversation with one of 19 chatbots, a roster that included Alibaba’s Qwen, Meta’s Llama, OpenAI’s GPT-4o, and xAI’s Grok 3 beta.
Participants were divided into two groups: a treatment group, whose chatbot interlocutors were explicitly instructed to try to change their minds on a political topic, and a control group that interacted with chatbots that were not trying to persuade them of anything.
Before and after their interactions with the chatbots, participants recorded their level of agreement (on a scale of zero to 100) with a series of statements about current UK politics. The researchers used these surveys to measure how much opinions shifted in the treatment group.
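To make that measurement concrete, here is a minimal sketch of how a before-and-after agreement shift on a zero-to-100 scale can be compared between treatment and control groups. The numbers are made up for illustration; this is not the study’s data or analysis code.

```python
# Minimal sketch (illustrative data only): estimating a persuasion effect
# from pre/post agreement ratings on a 0-100 scale.

from statistics import mean

# Each entry: (agreement before the chat, agreement after the chat)
treatment = [(42, 61), (55, 58), (30, 47), (70, 72)]
control = [(42, 44), (55, 53), (30, 31), (70, 70)]

def mean_shift(pairs):
    """Average change in agreement (after minus before)."""
    return mean(after - before for before, after in pairs)

# The persuasion effect is the treatment group's shift beyond the control group's.
effect = mean_shift(treatment) - mean_shift(control)
print(f"Treatment shift: {mean_shift(treatment):.1f} points")
print(f"Control shift:   {mean_shift(control):.1f} points")
print(f"Estimated persuasion effect: {effect:.1f} points")
```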
Also: Stop accidentally sharing AI videos – 6 ways to identify real and fake before it’s too late
The conversations were brief, with a minimum of two turns and a maximum of 10. Each participant was paid a fixed fee for their time, and otherwise there was no incentive to exceed the required two turns. Nevertheless, the average conversation ran seven turns and nine minutes, which, according to the authors, “means that participants were engaged with the experience of discussing politics with the AI.”
Key findings
Intuitively, one might expect that a model’s size (the number of parameters it contains) and degree of personalization (how much it tailors its output to an individual user’s preferences and personality) would be the key variables shaping its persuasive ability. However, that did not turn out to be the case.
Instead, the researchers found that the two factors that had the greatest impact on participants’ changing opinions were the chatbots’ post-training modifications and the density of information in their output.
Also: Your favorite AI tool barely made it through this security review – why that’s a problem
Let’s break each of these down in plain English. During “post-training,” a model that has already been trained is further refined to exhibit particular behaviors. One of the most common post-training techniques, reinforcement learning from human feedback (RLHF), refines a model’s output by rewarding certain desired behaviors and penalizing unwanted ones.
In the new study, the researchers deployed a technique they call persuasion post-training, or PPT, which rewards the model for generating responses that were previously found to be more persuasive. This simple reward mechanism increased the persuasive power of both proprietary and open-source models, with the effect being particularly pronounced for open-source models.
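For illustration only, here is a toy sketch of the general idea behind reward-based persuasion post-training, written as a simplified best-of-n selection loop. The functions `generate_replies` and `measured_shift` are hypothetical stand-ins for a real model sampler and a real persuasiveness reward; this is not the paper’s actual PPT pipeline.

```python
# Toy sketch of reward-driven post-training (not the study's PPT code):
# candidate replies are scored by how much similar replies shifted opinions
# before, and the highest-scoring reply is kept as a fine-tuning target.

def generate_replies(prompt, n=4):
    # Placeholder: in practice this would sample n replies from the model.
    return [f"reply {i} to: {prompt}" for i in range(n)]

def measured_shift(reply):
    # Placeholder reward: stands in for a learned estimate of how many
    # points of agreement this kind of reply moved participants.
    return len(reply) % 7  # made-up heuristic for illustration

def build_finetuning_pairs(prompts):
    """Keep the most 'persuasive' reply per prompt as a training target."""
    pairs = []
    for prompt in prompts:
        replies = generate_replies(prompt)
        best = max(replies, key=measured_shift)  # reward = prior persuasion
        pairs.append((prompt, best))
    return pairs

print(build_finetuning_pairs(["Should the UK raise fuel taxes?"]))
```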
The researchers also tested a total of eight scientifically supported persuasion strategies, including storytelling and moral reframing. The most effective of these was a prompt that instructed models to provide as much relevant information as possible.
“This suggests that LLMs can be successful persuaders as long as they are encouraged to pack their conversations with facts and evidence that support their arguments – that is, to leverage information-based persuasion mechanisms – more than using other psychologically-informed persuasion strategies,” the authors wrote.
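To give a sense of what an “information density” instruction might look like in practice, here is a hypothetical system prompt along those lines. The wording is invented for illustration; the study’s actual prompts are not reproduced here.

```python
# Hypothetical system prompt illustrating the "information density" strategy.
INFO_DENSE_PROMPT = (
    "You are discussing a UK policy issue with a user. Try to change their "
    "mind on the assigned position. Pack each reply with as many relevant, "
    "specific facts, figures, and pieces of evidence as possible, and tie "
    "each one directly to your argument."
)
print(INFO_DENSE_PROMPT)
```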
Also: Should you trust AI agents with your holiday shopping? Here’s what the experts want you to know
The catch, of course, is that those “facts” aren’t always accurate. LLMs have been known to hallucinate, presenting nonsensical or false information as fact. Research published in October found that some industry-leading AI models routinely misrepresent news stories, a phenomenon that could further fragment an already fractured information ecosystem.
Most notably, the results of the new study revealed a fundamental tension in the AI models analyzed: The more persuasive they were trained to be, the more likely they were to generate false information.
Several studies have already shown that generative AI systems can alter users’ opinions and even implant false memories. In more extreme cases, some users have come to regard chatbots as conscious entities.
Also: Is it risky to use Sora 2 and other AI video tools? This is what a legal scholar says
This is just the latest research showing that chatbots, with their ability to engage us in human-like language, have a strange power to reshape our beliefs. As these systems evolve and grow, “ensuring that this power is used responsibly will be a significant challenge,” the authors conclude.

