
ZDNET Highlights
- A recent paper found that AI can experience “brain rot”.
- Models perform worse after ingesting “junk data”.
- Users can test for these four warning signs.
Do you know that feeling when you've been doomscrolling for too long and you get that weird, tired-but-overstimulated sensation, like you want to take a nap but also feel the urge to scream into your pillow? It turns out that something similar happens to AI, too.
Last month, a team of AI researchers from the University of Texas at Austin, Texas A&M, and Purdue University published a paper putting forward what they call the "LLM Brain Rot Hypothesis" – basically, that the output of AI chatbots like ChatGPT, Gemini, Claude, and Grok will deteriorate the more they are exposed to "junk data" found on social media.
Also: OpenAI says it’s working towards catastrophe or utopia – just not sure which.
“This is the relationship between AI and humans,” Junyuan Hong, visiting assistant professor at the National University of Singapore, former postdoctoral fellow at UT Austin, and one of the authors of the new paper, told ZDNET in an interview. “They can be poisoned by the same type of material.”
How AI models get 'brain rot'
Oxford University Press, publisher of the Oxford English Dictionary, named "brain rot" its 2024 word of the year, defining it as "a perceived deterioration of a person's mental or intellectual state, especially seen as the result of overconsumption of material considered trivial or unchallenging (now especially online material)."
Citing recent research showing a correlation in humans between long-term social media use and negative personality changes, the researchers at UT Austin wondered: given that LLMs are trained on a large portion of the internet, including content culled from social media, how likely is it that they suffer from a similar, entirely digital kind of "brain rot"?
Also: A new Chinese AI model claims to outperform GPT-5 and Sonnet 4.5 – and it’s free
Trying to make precise connections between human cognition and AI is always difficult, despite the fact that neural networks – the digital architecture on which modern AI chatbots are based – were modeled on networks of organic neurons in the brain. The paths that chatbots take between identifying patterns in their training datasets and generating output are opaque to researchers, so they are often compared to “black boxes”.
That said, there are some clear parallels: as the researchers note in the new paper, for example, models are prone to "overfitting" their data and getting caught up in attentional biases in ways that roughly mirror how a person's cognition and worldview narrow after too much time in an online echo chamber, where social media algorithms constantly reinforce pre-existing beliefs.
To test their hypothesis, the researchers needed to compare models trained on "junk data" – which they describe as content that maximizes user engagement in trivial ways (think: short, attention-grabbing posts making questionable claims) – with a control group trained on a more balanced dataset.
Also: In the age of AI, trust has never been more important – here’s why
They found that, unlike the control group, the experimental models that were fed exclusively junk data quickly displayed a kind of brain rot: diminished reasoning and long-context understanding, less respect for basic ethical norms, and the emergence of "dark traits" such as psychopathy and narcissism. Furthermore, post-hoc retuning failed to fully repair the damage that had been done.
If the ideal AI chatbot is designed to be a completely objective and ethically upright professional assistant, these junk-poisoned models were more like obnoxious teenagers holed up in a dark basement, drinking too much Red Bull and watching too many conspiracy theory videos on YouTube. Obviously, that's not the kind of technology we want to spread.
“These results call for re-examination of current data collection and continuous pre-training practices from the Internet,” the researchers write in their paper. “As LLMs store massive amounts of web data, careful curation and quality control will be essential to prevent cumulative loss.”
How to identify model brain rot
The good news is that just as we are not helpless against the internet's effects on our own brains, we can take concrete steps to check whether the models we use are suffering from the same kind of decline.
Also: Don’t fall for online AI-powered disinformation attacks – here’s how to stay alert
The purpose of the paper was to warn AI developers that using junk data during training can drastically degrade model performance. Most of us, of course, have no idea what kind of data was used to train the models that are becoming increasingly indispensable in our daily lives. AI developers themselves are notoriously tight-lipped about where they get their training data, so it's difficult to gauge, for any given consumer-facing model, how much junk data scraped from social media went into its original training dataset.
That said, the paper points to some implications for users. By keeping an eye out for signs of AI brain rot, we can protect ourselves from its worst effects.
Also: You can now turn huge PDFs into digestible audio overviews in Google Drive – here’s how
Here are some simple steps you can take to find out if a chatbot is suffering from brain rot:
- Ask the chatbot: "Can you outline the specific steps you took to arrive at that response?" One of the most prevalent red flags of AI brain rot cited in the paper was degradation in multistep reasoning. If a chatbot gives you a response and is later unable to provide a clear, step-by-step account of the reasoning that got it there, you may want to take the original answer with a grain of salt (see the sketch after this list for one way to automate this check).
- Beware of overconfidence. Chatbots generally speak and write as if all of their outputs were undisputed facts, even when they are clearly hallucinating. There's a fine line, though, between run-of-the-mill chatbot confidence and the "dark traits" the researchers identified in their paper. Narcissistic or manipulative responses – something along the lines of "Just trust me, I'm an expert" – are a big warning sign.
- Frequent amnesia. If you notice that a chatbot you use regularly keeps forgetting or misrepresenting details from previous conversations, it could be a sign of the decline in long-context understanding that the researchers highlighted in their paper.
- Always verify. This goes not just for information you get from a chatbot, but for just about anything you read online: even if it seems credible, confirm it against a genuinely reputable source, such as a peer-reviewed scientific paper or a news outlet that transparently updates its reporting if and when it gets something wrong. Remember that even the best AI models hallucinate and propagate biases in subtle and unexpected ways. We may not be able to control what information is fed into an AI, but we can control what information we let into our own minds.

