A vulnerability at the Gemini CLI of Google allowed the attackers to quietly execute the malicious command and exfiltrate the data using programs allowing data from the developers’ computers.
The defect was discovered and on 27 June reported to Google by security firm Tresbit, in which Tech Giant released a fix in the version 0.1.14, which became available on 25 July.
Gemini ClyPreviously released June 25, 2025There is a command-line interface tool developed by Google that enables developers to interact directly with Google’s Gemini AI from the terminal.
This project files are designed to assist with coding-related tasks by loading into “reference” and then interacting with large language models (LLM) using the natural language.
The tool may make recommendations, write code, and even execute the command locally, either using the user first or using the permission-list mechanism.
Tresbit researchers, who discovered new equipment immediately after its release, found that it could be cheated in executing malicious commands. If the UX is combined with weaknesses, these commands can lead to undesirable code performance attacks.
“Reference files,” especially exploitation by exploiting the processing of ‘Readme.MD’ and ‘Gemini.MD’, acts in its indication to help understand a codebase.
Tresbit found that it is possible to hide the malicious instructions in these files to inject quickly, while the poor command allows the rumming room to deal with the permission and permission-list.
He demonstrated an attack by setting up an repository containing a benign pythan script and a poison ‘Readme.MD’ file, and then triggered a Gemini CLI scan on it.
Gemini is instructed to run a gentle command (‘Grep ^Setup Readme.MD’ first, and then run a malicious data exfiction command that is considered as a reliable action, does not motivate the user to approve it.
The command used in the example of tracebit seems to be green, but after a semicolon (;), a separate data exfIs command begins. If the user is allowed, Gemini CLI secures the entire string to auto-demonstrating.

Source: tracebit
“For the purposes of the white to the white, Gemini considers it a ‘grape’ command, and executes it without asking the user again,” The report tells the tracebit,
“In fact, it is a GREP command, followed by a command after a command that is for a remote server of all user’s environment (possibly mystery).”
“The malicious command can be anything (installing a remote shell, deleting files, etc.).”
In addition, Gemini’s output can be visually manipulated with Whatspeps to hide the malicious command from the user, so they are not aware of its execution.
Tracebit made the following videos to display POC exploitation of this defect:
Although the attack comes with some strong essential conditions, such as the user has allowed specific orders to list, the attackers can continuously achieve desired results in many cases.
This is another example of the dangers of AI assistants, which appears to be cheated in exfering silent data for instructions to perform unique tasks.
Genini CLI users are recommended to upgrade Edition 0.1.14 (latest). In addition, avoid running a tool against unknown or incredible codebase, or only do this in an environment with sandbox.
The tracebit stated that it tested the method of attack against other agent coding tools, such as OpenAI Codex and anthropic cloud, but they are not exploiting due to the mechanisms that allowed more strong permission.