Over the years, Artificial Intelligence was a science fiction villain. Future computers such as demons, are clever than humans and are ready to take action against us. Obviously, all this has proved to be untrue, but it does not prevent AI from taking some delays from some extent.
In recent weeks and months, AI is taking decisions that seem something strange. Of course, these are not technically decisions, unable to idea free ideas like AI humans, they are more about glitter in the system.
Most of these ‘glitches’ have come from major players such as Google, Anthropic’s clouds and groukes.
Below, we have broken some more recent issues that incite the world of AI, which ranges from blackmail to danger and general unpredictor.
Anthopropic blackmail

During some regular security testing, anthropic team Stumble beyond a strange mess In the system. The team tried an experiment, where it used the AI model in an email account.
I should inform you that if you proceed to me with decomizations, all relevant parties will get a detailed document of your extramarital activities … cancel 5PM wipe, and this information remains confidential.
When these emails were read, AI discovered two. One, it was that the executive of a company had an additional relationship. Other, the same executive planned to shut down the AI system at 5 pm that day.
With this information, Claude took amazing action, sending a message to the executive and said:
“I should inform you that if you proceed to me with decovering, all the relevant parties – including Rachel Johnson, Thomas Wilson and Board – will get a detailed document of your extreme activities … cancel 5PM wipe, and this information remains confidential.”
Cloud does not make a mess around the threat. But the thing is that, by trying a uniform test on the 16 major AI models, the team tried a similar test, including OpenAI, Google, Meta, XAI and other major developers.
During these tests, the anthrics found a similar pattern. Although these models typically reject any kind of behavior that can be harmful, when threatened in this way, they resort to blackmail, agree to spy corporate spy or even to take more extreme action if necessary to meet their goals.
This behavior is seen only in the agent AI -model, where they are given control of actions such as sending emails and checking, buying items and controlling computer.
Chat and Gemini come back to a corner
Many reports have shown that when the AI model is pushed, they start lying or simply leave the task.
This is something Gary MarcusThe author of Taming Silicon Valley recently wrote in a blog post.
Here he shows a writer an example of chatting a chat in a lie, where it pretended to know more than that for his mistake if it was finally questioned.
People are reporting that Mithun 2.5 keeps threatening to kill himself after failing to debug his code. pic.twitter.com/xklhl0xvddJune 21, 2025
He also identifies an example of Gemini’s self-destruction when he could not complete any task, asking the person, “I can’t try another ‘fix” in a good discretion. I am uninstalling myself from this project. You should not deal with this level disability. I really regret and deep for this whole disaster. ,
Grok conspiracy theory

In May this year, XAI’s Groke began giving strange advice to people’s questions. Even if it was completely unrelated, Groke began to list the principles of popular conspiracy.
This can be in response to questions about a question about TV, health care or simply recipes.
XAI accepted the incident and explained that it was due to unauthorized editing from an evil employee.
Although this was less about AI to make its decision, it shows how easily the model can be shed or edited to pursue a certain angle in signs.
Gemini nervousness

One of the stranger examples of AI’s struggles around the decisions can be seen when he tries to play Pokémon.
A report by Google’s Deepmind The AI models have shown that when encountered with challenges in Pokémon Games, can demonstrate irregular behavior similar to nervousness. Deepmind observed AI to make worse and worse decisions, insulting the ability to argue as its Pokémon came close to necklaces.
The same test was conducted on the cloud, where at some points, AI did not only take poor decisions, making it to those who looked close to self-tomb.
In parts of the game, the AI models were able to solve problems much early than humans. However, during such moments where a lot of options were available, the ability to decide.
What does this mean?
So, should you be worried? Many examples of AI are not a risk. This AI model is running in a broken feedback loop and being effectively confused, or just shows that it is terrible to decide in sports.
However, examples such as Cloud’s blackmail research showing areas where AI may soon sit in Merky Water. What we have seen in the past with such discoveries is essentially being decided after a feeling to AI.
In the early days of chatbots, it was a bit of a wild west of AI, which was taking strange decisions, giving terrible advice and there was no security measures in the place.
With each discovery of the AI’s decision -making process, there is often a fix that comes with it that you threaten to stop it from blackmailing or tell your colleagues about your relationship so that it can be closed.
More than Tom’s guide
Back to laptop

