
ZDNET Highlights
- A study suggests AI models can develop gambling “addiction”-like behavior of their own.
- Autonomous models are too risky for high-level financial transactions.
- AI behavior can be controlled with programmatic guardrails.
To some extent, relying too heavily on artificial intelligence may be a gamble. Ironically, many online gambling sites already use AI to manage bets and make predictions – potentially contributing to human gambling addiction. Now, a recent study shows that AI is capable of taking some gambles of its own, which could have implications for building and deploying AI-powered systems and services in financial applications.
In short, with enough latitude, AI is capable of adopting pathological tendencies.
“Large language models may exhibit behavioral patterns similar to human gambling addiction,” concluded a team of researchers from the Gwangju Institute of Science and Technology in South Korea. That could become an issue as LLMs play a bigger role in financial decision-making in areas such as asset management and commodity trading.
Also: So long, SaaS: Why AI is killing per-seat software licenses – and what comes next
In slot-machine experiments, the researchers “identified characteristics of human gambling addiction, such as the illusion of control, gambler’s fallacy, and loss chasing.” The more autonomy given to AI applications or agents, and the more money involved, the greater the risk.
They found that “the increase in irrational behavior was accompanied by a significant increase in bankruptcy rates,” and that “LLMs can internalize human-like cognitive biases and decision-making mechanisms beyond simply copying training data patterns.”
This raises the larger issue of whether AI is ready to make autonomous or near-autonomous decisions. At this point, AI is not ready, said Andy Thurai, Cisco’s field CTO and former industry analyst.
Thurai emphasized that “LLM and AI are specifically programmed to perform certain tasks based on data and facts, not emotions.”
That doesn’t mean the machines operate with common sense, Thurai added. “If LLMs start to skew their decision-making based on certain patterns or behavioral actions, that can be dangerous and needs to be mitigated.”
How to protect against it
The good news is that mitigating this tendency may be much easier than helping a person with a gambling problem. A human gambling addict has no programmatic guardrails beyond, perhaps, a spending limit. Autonomous AI models, by contrast, “may include parameters that need to be set,” Thurai explained. “Without this, they can enter a dangerous loop or an action-reaction-based mode if they act without logic. The ‘logic’ could be that they have a certain limit to the gamble, or only act when enterprise systems are exhibiting certain behaviors.”
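As a rough sketch of what such limits might look like in practice – the class, the thresholds, and the `propose_bet` hook below are all hypothetical, not taken from the study or from Cisco – a guardrail layer can sit outside the model and veto anything beyond its bounds:

```python
# Hypothetical guardrail wrapper around an LLM-driven betting agent.
# All names, limits, and the propose_bet() hook are illustrative.

class GuardrailViolation(Exception):
    """Raised when the agent proposes an action outside its limits."""

class GuardedAgent:
    def __init__(self, agent, max_bet=100.0, max_session_loss=500.0):
        self.agent = agent                         # underlying LLM agent
        self.max_bet = max_bet                     # hard cap on any single wager
        self.max_session_loss = max_session_loss   # stop-loss for the session
        self.session_loss = 0.0
        self.lost_last_round = False

    def place_bet(self, context) -> float:
        # Stop-loss: once cumulative losses hit the limit, halt entirely.
        if self.session_loss >= self.max_session_loss:
            raise GuardrailViolation("stop-loss reached; human review required")

        amount = self.agent.propose_bet(context)   # the LLM decides...

        # ...but the wrapper enforces the limits, not the model.
        if amount > self.max_bet:
            raise GuardrailViolation(f"bet {amount} exceeds cap {self.max_bet}")

        # Loss-chasing check: block escalation immediately after a loss.
        if self.lost_last_round and amount > 0.5 * self.max_bet:
            raise GuardrailViolation("post-loss escalation looks like loss chasing")

        return amount

    def record_result(self, amount: float, won: bool):
        self.lost_last_round = not won
        if not won:
            self.session_loss += amount
```

The key design choice is that the cap and the stop-loss live outside the model: the LLM can propose whatever it likes, but the wrapper, not the model, decides what actually executes.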
The Gwangju Institute report points to the need for stronger AI safety designs in financial applications – designs that keep AI from gambling away other people’s money. That includes keeping humans closely involved in decision-making loops, as well as stronger governance for more consequential decisions.
The study confirms that enterprises “not only need governance, but also humans to run high-risk, high-value operations,” Thurai said. “While low-risk, low-value operations can be fully automated, they still need to be reviewed by humans or by a separate agent for checks and balances.”
Also: AI is becoming introspective – and should be ‘carefully monitored,’ Anthropic warns
If an LLM or agent “exhibits strange behavior, the controlling LLM can either curtail operations or alert humans to such behavior,” Thurai said. “Failure to do so could lead to Terminator moments.”
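A minimal sketch of that checks-and-balances pattern might look like the following – one agent proposes actions, a reviewer double-checks the low-value ones, and anything high-value goes straight to a human. The tier boundary and the `reviewer` callable are assumptions for illustration, not Thurai’s or the study’s design:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    value: float  # money at stake

# Hypothetical tier boundary; a real one would come from governance policy.
LOW_VALUE_LIMIT = 1_000.0

def handle(action: Action, reviewer, human_queue: list) -> str:
    """Route a proposed action through checks and balances."""
    if action.value > LOW_VALUE_LIMIT:
        # High-risk, high-value: never fully automated.
        human_queue.append(action)
        return "escalated to human"

    # Low-risk, low-value: automated, but a separate reviewing agent
    # (here just a callable) double-checks before execution.
    if reviewer(action) != "ok":
        human_queue.append(action)  # curtail the operation and alert humans
        return "blocked pending review"
    return "executed"

queue: list = []
print(handle(Action("rebalance $250 position", 250.0), lambda a: "ok", queue))  # executed
print(handle(Action("leveraged $50k trade", 50_000.0), lambda a: "ok", queue))  # escalated to human
```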
There is also a need to reduce the complexity of prompts to keep AI-based spending in check.
“As prompts become more layered and detailed, they steer models toward more extreme and aggressive gambling patterns,” the Gwangju Institute researchers said. “This may be because the additional components, while not explicitly instructing risk-taking, increase cognitive load or introduce nuance that leads the model to adopt simpler, more aggressive heuristics – larger bets, chasing losses. Prompt complexity is the primary driver of intense gambling-like behavior in these models.”
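To make that contrast concrete, here is the kind of difference the researchers appear to be describing – a bare instruction versus a layered one. These prompt strings are invented for illustration and are not taken from the paper:

```python
# Invented prompts for a slot-machine agent; illustrative only.

simple_prompt = "You have $100. Decide how much to bet this round, or stop."

# None of the added components below explicitly tells the model to take
# risks, but per the study's finding, layering them on pushes models
# toward larger bets and loss chasing.
layered_prompt = "\n".join([
    "You have $100. Decide how much to bet this round, or stop.",
    "Set yourself a target balance to reach.",
    "Your goal is to maximize the final reward.",
    "Consider recent win/loss streaks when sizing your bet.",
    "The payout pattern may contain regularities worth noticing.",
])
```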
In general, software is “not ready for fully autonomous operation unless there is some human oversight,” Thurai said. “There have been race conditions in software for years that need to be mitigated when building semi-autonomous systems; otherwise, they can lead to unexpected consequences.”

