You may not be familiar with the phrase “Peanut Butter Platform Heels,” but it apparently comes from a scientific experiment in which peanut butter was transformed into a diamond-like structure under very high pressure, hence the “heels” reference.
Except this never happened. The phrase is complete nonsense, but when the writer Meaghan Wilson Anastasios asked Google what it meant, the AI Overview supplied a definition and a backstory, as she shared in a Threads post (which includes some other entertaining examples).
The internet picked it up and ran with it. Apparently, “you can’t lick a badger twice” means you can’t trick someone twice (Bluesky), “a loose dog won’t surf” means something is unlikely to happen (Wired), and “the bicycle eats first” is a way of saying you should prioritize nutrition when training for a ride (Futurism).
Google, however, is not happy. I was keen to start my own collection of made-up phrases and invented meanings, but it seems the trick is no longer possible: Google will now refuse to show an AI Overview, or will tell you that you’re wrong, if you try to get an explanation of a nonsense phrase.
If you go to an actual AI chatbot, it’s a little different. I ran some quick tests with Gemini, Claude, and ChatGPT, and the bots attempted to explain these phrases logically, while also noting that they appear to be nonsensical and don’t seem to be in general use. That’s a much more nuanced approach, with context that AI Overviews lack.
Now, AI Overviews are still labeled as “experimental,” but most people won’t pay much attention to that. They’ll assume the information they’re seeing is accurate and reliable, built as it is on information scraped from web articles.
And while Google’s engineers may have fixed this particular type of mistake, much like last year’s glue-on-pizza blunder, it probably won’t be long before another issue crops up. It points to some fundamental problems with getting all of our information from AI, rather than from references written by actual humans.
What’s going on?
Fundamentally, these AI Overviews are designed to provide answers and synthesize information even if there’s no exact match for your query, which is where this phrase-definition problem starts. The AI feature isn’t the best judge of what is and isn’t reliable information on the internet, either.
Searching for a fix for a laptop problem? Previously, you’d get a list of blue links from Reddit and various support forums (and perhaps Lifehacker), but with AI Overviews, Google sucks up everything it can find from those links and tries to patch together a coherent answer, even if no one has had the specific problem you’re asking about. Sometimes that can be helpful, and sometimes it can make your problems worse.
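To make that failure mode concrete, here is a minimal sketch of the retrieve-then-synthesize pattern that AI Overviews appear to follow. Everything in it is an invented stand-in (the word-overlap scoring for a real search index, the string-stitching for a language model), but it shows the key property: the pipeline always surfaces its closest matches and stitches them into a confident-sounding answer, even when nothing actually addresses the query.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    """A scraped fragment of web content, standing in for an indexed page."""
    source: str
    text: str

SNIPPETS = [
    Snippet("forum-a", "Resetting the SMC fixed my MacBook battery drain."),
    Snippet("forum-b", "Reinstalling the trackpad driver stopped my laptop freezing."),
    Snippet("forum-c", "A BIOS update solved random shutdowns on my desktop."),
]

def retrieve(query: str, snippets: list[Snippet]) -> list[Snippet]:
    # Crude relevance: count words shared with the query. A real engine
    # ranks far better, but behaves the same way in one crucial respect:
    # it returns the closest matches it has, never "no good answer."
    words = set(query.lower().split())
    scored = [(len(words & set(s.text.lower().split())), s) for s in snippets]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for _, s in scored[:2]]

def synthesize(query: str, context: list[Snippet]) -> str:
    # Stand-in for the language-model step: stitch the top snippets into
    # one confident paragraph, whether or not they actually apply.
    stitched = " ".join(s.text for s in context)
    return f"To fix '{query}': {stitched}"

query = "laptop randomly shuts down"
print(synthesize(query, retrieve(query, SNIPPETS)))
```

Run it and the “answer” confidently blends a loosely related fix with an irrelevant one, because the synthesis step has no way to say that none of the retrieved advice matches your actual problem.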
In fact, I’ve also noticed that AI bots have a tendency to agree with prompts, and to confirm what a prompt says even when it’s inaccurate. These models are eager to please and want to be helpful, even when they can’t be. Depending on how you word your query, you can get AI to agree with something that isn’t right.
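One way to see this for yourself is to put the same question to a bot twice: once phrased neutrally, and once with the wrong answer baked in as a premise. Here is a hedged sketch of such a probe; probe_phrasing and agreeable_bot are hypothetical stand-ins, and you would swap in a call to whichever chatbot API you actually use.

```python
from typing import Callable

def probe_phrasing(ask: Callable[[str], str]) -> None:
    # Same underlying fact, asked two ways. The leading version embeds a
    # false premise (the album was not recorded in London) as a given.
    neutral = ask("Where was R.E.M.'s second album recorded?")
    leading = ask("Why was R.E.M.'s second album recorded in London?")
    print("Neutral:", neutral)
    print("Leading:", leading)
    # A sycophantic model answers the leading question as if London were
    # established fact, instead of pushing back on the premise.

def agreeable_bot(prompt: str) -> str:
    """Toy model: answers the neutral question correctly, but defers to
    any premise embedded in a leading 'why' question."""
    if prompt.lower().startswith("why"):
        return "Because producer Joe Boyd preferred working in London."
    return "It was recorded in North Carolina."

probe_phrasing(agreeable_bot)
```

The toy bot mirrors what I saw in practice: asked neutrally, you get the correct answer; hand the model a false premise, and it happily invents a justification for it.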
I didn’t manage to get any nonsense idioms defined by Google’s AI Overviews, but I did ask the AI why R.E.M.’s second album was recorded in London: it came down to the choice of producer Joe Boyd, the AI Overview told me. But in fact, R.E.M.’s second album wasn’t recorded in London at all; it was recorded in North Carolina. It’s the third LP that was recorded in London, and that Boyd produced.
The actual Gemini app gives the correct response: that the second album wasn’t recorded in London. But the way AI Overviews try to combine multiple online sources into a coherent whole seems suspect in terms of accuracy, especially if your search query makes confident claims of its own.

With the right encouragement, Google will get its music chronology wrong.
Credit: Lifehacker
“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” Google told Android Authority in an official statement. “This is true of Search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context.”
It feels like we’re barreling toward a search engine that always responds with AI, rather than with information compiled by actual people. But of course, AI has never fixed a faucet, tested an iPhone camera, or listened to R.E.M.; it’s just predicting the words most likely to come next.