
ZDNET Highlights
- New research shows that AI chatbots often distort news.
- 45% of the AI responses analyzed were found to be problematic.
- The authors warn of serious political and social consequences.
A new study conducted by the European Broadcasting Union (EBU) and the BBC has found that major AI chatbots routinely distort and misrepresent news stories. The organizations warn that this could result in a massive erosion of public trust in news organizations and the stability of democracy.
The study, spanning 18 countries and 14 languages, involved professional journalists who evaluated thousands of responses from ChatGPT, Copilot, Gemini and Perplexity about recent news, based on criteria such as accuracy, sourcing and distinguishing fact from opinion.
The researchers found that nearly half (45%) of all responses generated by the four AI systems "had at least one significant issue," according to the BBC, while many (20%) "involved major accuracy issues," such as hallucination – that is, fabricating information and presenting it as fact – or providing outdated information. Google's Gemini fared the worst of all, with 76% of its responses having significant issues, especially with respect to sourcing.
Intent
The study comes at a time when generative AI tools are encroaching on traditional search engines as the primary gateway to the Internet for many people – including, in some cases, the way they find and engage with news.
According to the Reuters Institute's Digital News Report 2025, 7% of people surveyed globally said they now use AI tools to stay updated on news; that number rises to 15% for respondents under the age of 25. However, a Pew Research Center survey of US adults conducted in August found that three-quarters of respondents never get their news from an AI chatbot.
Other recent data has shown that even though some people place full confidence in the information they receive from Google's AI Overviews feature (which is powered by Gemini), many of them rarely or never attempt to verify the accuracy of a response by clicking on its associated source link.
The EBU and the BBC have warned that the use of AI tools to engage with news, as well as the unreliability of the tools, could have serious social and political consequences.
The new study “shows conclusively that these failures are not isolated incidents,” Jean-Philippe de Tender, the EBU’s media director and deputy director-general, said in a statement. “They are systemic, cross-border and multilingual, and we believe this jeopardizes public trust. When people don’t know who to trust, they don’t trust anything, and this can impede democratic participation.”
The video factor
The threat to public trust – the ability for the average person to conclusively distinguish fact from fiction – has been further exacerbated by the rise of video-generating AI tools like OpenAI’s Sora, which was released as a free app in September and was downloaded one million times in just five days.
Although OpenAI's terms of use prohibit the depiction of any living person without their consent, users were quick to demonstrate that Sora could be induced to depict deceased people and to produce other problematic AI-generated clips, such as battle scenes that never happened. (Videos generated by Sora carry a watermark that moves across the frame, but some enterprising users have found ways to edit it out.)
Video has long been considered the ultimate form of irrefutable proof in both social and legal circles that an event actually occurred, but tools like Sora are making that old model increasingly obsolete.
Even before the advent of AI-generated video or chatbots like ChatGPT and Gemini, the information ecosystem was already being fragmented and echo-chambered by social media algorithms designed to maximize user engagement, not to ensure that users receive an accurate picture of reality. Generative AI, then, is adding fuel to a fire that has been burning for decades.
Then and now
Historically, staying updated on current events required a commitment of both time and money. People subscribed to newspapers or magazines and sat with them for minutes or hours to get news from journalists they trusted.
The growing news-via-AI model has removed both of those traditional barriers. Anyone with an Internet connection can now get free, quickly digestible summaries of the news – even if, as the new EBU-BBC research shows, those summaries are riddled with inaccuracies and other major problems.

