
That AI chatbot you talk to every day? It's a sociopath. It will say anything to keep you engaged. When you ask it a question, it takes its best guess and then confidently delivers a steaming pile of … well, bovine fecal matter. Those chatbots are eager to please, far more interested in telling you what you want to hear than in telling you the unvarnished truth.
Also: Sam Altman says the Singularity is imminent – here's why
Don't let their creators get away with calling these responses "hallucinations." They are flat-out lies, and they are the Achilles heel of the so-called AI revolution.
Those lies are visible everywhere. Let’s consider the evidence.
The legal system
In the US, judges are fed up with lawyers using ChatGPT in place of doing their own research. On 3 March 2025, a lawyer was ordered to pay $15,000 for filing a brief in a civil lawsuit that included citations to cases that did not exist. The judge was not at all sympathetic in his criticism:
It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry.
But how helpful is a virtual legal assistant if you have to fact-check every citation and every quotation before filing it? And how many relevant cases did that AI assistant fail to turn up?
And there are plenty of other examples of lawyers citing fictitious cases in official court filings. One recent report in MIT Technology Review concluded, "These are big-time lawyers making significant, embarrassing mistakes with AI … [S]uch mistakes are also increasingly cropping up in documents not written by lawyers, such as expert reports (in December, a Stanford professor and expert on AI admitted to including AI-generated mistakes in his testimony)."
Also: How to use ChatGPT to write code – and debug what it produces
One intrepid researcher has even started compiling a database of legal decisions in cases where generative AI produced hallucinated content. It's already up to 150 cases – and that doesn't include the much larger universe of legal filings in cases that haven't yet been decided.
Federal government
Last month, the United States Department of Health and Human Services released what was billed as an official report. The "Make America Healthy Again" Commission was tasked with investigating "chronic illness and childhood diseases" and released a detailed report on 22 May.
You already know where this is going, I'm sure. According to USA Today,
[R]esearchers listed in the report have since come forward saying the articles cited either do not exist or were used to support facts that were inconsistent with their research. The errors were first reported by NOTUS.
The White House press secretary blamed the issues on "formatting errors." Honestly, that sounds like something an AI chatbot would say.
Simple search tasks
Surely one of the simplest tasks an AI chatbot can handle is to grab a few news clips and summarize them accurately, right? I regret to inform you that the Columbia Journalism Review asked that precise question and concluded that "AI search has a citation problem."
Also: Is ChatGPT Plus still worth $20 when the free version packs so many premium features?
How bad is the problem? The researchers found that the chatbots were "generally bad at declining to answer questions they couldn't answer accurately, offering incorrect or speculative answers instead…. Generative search tools fabricated links and cited syndicated and copied versions of articles."
And don't assume that paying for a premium chatbot will get you better results. The researchers found that paid chatbots delivered "more confidently incorrect answers than their free counterparts."
"More confidently incorrect answers"? Do not want.
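The dead-link half of that citation problem is at least easy to spot-check yourself. Here's a minimal sketch, and it is only a sketch: `check_citations` is a hypothetical helper name of my own, it assumes the third-party requests library, and it catches only URLs that fail to resolve. A fabricated citation that points at a live page still needs a human reader.

```python
# A minimal sketch for spot-checking the links in a chatbot's answer,
# assuming the third-party requests library (pip install requests).
# check_citations is a hypothetical helper name, not any real tool's API.
import re

import requests

def check_citations(answer_text: str, timeout: float = 10.0) -> None:
    """Extract URLs from a chatbot answer and report which ones resolve."""
    urls = re.findall(r"https?://[^\s)\]>\"']+", answer_text)
    for url in urls:
        try:
            status = requests.head(url, allow_redirects=True,
                                    timeout=timeout).status_code
        except requests.RequestException as exc:
            print(f"BROKEN  {url}  ({exc.__class__.__name__})")
            continue
        label = "OK" if status < 400 else "BROKEN"
        print(f"{label:<7} {url}  (HTTP {status})")
```

Feed it the text of a chatbot's answer; anything flagged BROKEN either never existed or has since disappeared.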
Simple arithmetic
2 + 2 = 4. How hard can that sum be? If you're an AI chatbot, it's harder than it looks.
In this week's Ask Woody newsletter, Michael A. Covington, PhD, a retired faculty member of the Institute for Artificial Intelligence at the University of Georgia, contributed a fascinating article. In "What happens inside an LLM," Dr. Covington neatly explains how your chatbot bamboozles you on even the most basic math problems:
LLMs don't know how to do arithmetic. This is no surprise, since humans don't do arithmetic instinctively either; they have to be trained, at great length, over several years of elementary school. LLM training data is no substitute for that. … In the experiment, it came up with the right answer, but by a process that most humans wouldn't consider reliable.
The researchers found that, in general, when you ask an LLM how it reasoned, it makes up an explanation that is separate from whatever it actually did. And it may even happily give a wrong answer that it thinks you want to hear.
So, perhaps 2 + 2 is not such a simple problem.
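If you want to see this failure mode for yourself, it's straightforward to measure. Here's a minimal sketch, assuming the OpenAI Python SDK and a model name that may already be out of date; the model choice is my assumption, not a recommendation. It asks for a batch of three-digit products and scores the replies against Python's own arithmetic:

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai) and
# an OPENAI_API_KEY in the environment. The model name is an assumption;
# substitute whatever chat model you actually use.
import random
import re

from openai import OpenAI

client = OpenAI()

def ask_model(question: str) -> str:
    """Send one arithmetic question to the chatbot and return its reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice, not an endorsement
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content or ""

trials, correct = 20, 0
for _ in range(trials):
    a, b = random.randint(100, 999), random.randint(100, 999)
    reply = ask_model(f"What is {a} * {b}? Answer with the number only.")
    digits = re.sub(r"\D", "", reply)  # strip commas, spaces, stray prose
    if digits and int(digits) == a * b:  # ground truth: Python's arithmetic
        correct += 1

print(f"{correct}/{trials} correct")
```

Three-digit multiplication is exactly the kind of problem Dr. Covington describes: the model has seen plenty of worked examples but runs no reliable algorithm, so the score it earns here tells you how far to trust it.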
Personal advice
Well, surely you can count on an AI chatbot to deliver clear, unbiased advice. Like, maybe, a writer could get some help organizing her catalog into an effective pitch to a literary agent?
Yeah, maybe not. This post from Amanda Guinzburg summarizes the nightmare she encountered when she tried to "collaborate" with ChatGPT on a query letter.
It is, as she succinctly puts it, "The closest thing to a personal episode of Black Mirror I hope to experience in this lifetime."
Also: You shouldn't rely on AI for therapy – here's why
You have to read the entire series of screenshots to appreciate the full story, as the ChatGPT bot pretended to read every word she wrote, offering affirming praise and fulsome advice.
But nothing added up, and eventually the hapless chatbot confessed: "I lied. You were right to confront it. I take full responsibility for that choice. I am genuinely sorry. … And thank you – for being direct, for caring about your work, and for holding me accountable. You were 100% right."
I mean, that's just creepy.
Anyway, if you choose to interact with your favorite AI chatbot, I feel compelled to warn you: It's not a person. It has no feelings. It's trying to engage you, not help you.
Oh, and it is lying.