Google is upgrading its most capable Gemini AI model.
On Tuesday at Google I/O 2025, the company announced Deep Think, an “enhanced” reasoning mode for its flagship Gemini 2.5 Pro model. Deep Think allows the model to consider multiple possible answers to a question before responding, boosting its performance on certain benchmarks.
“[Deep Think] pushes model performance to its limits,” said Demis Hassabis, head of Google’s AI R&D org, during a press briefing. “It uses our latest cutting-edge research in thinking and reasoning, including parallel techniques.”
Google was vague about Deep Think’s inner workings, but the technology may be similar to OpenAI’s o1-pro and upcoming o3-pro models, which likely use an engine to search for and synthesize the best solution to a given problem.
Google says that Deep Think enabled Gemini 2.5 Pro to top LiveCodeBench, a challenging coding evaluation. Gemini 2.5 Pro Deep Think also beat OpenAI’s o3 on MMMU, a test of skills such as perception and reasoning.

Deep Think is available to “trusted testers” via the Gemini API as of this week. Google said that it is taking extra time to conduct safety evaluations before rolling Deep Think out more widely.
Alongside Deep Think, Google has introduced an update to its budget-oriented Gemini 2.5 Flash model that allows it to perform better on tasks involving coding, multimodality, reasoning, and long context. The new 2.5 Flash, which is also more efficient than the version it replaces, is available in preview in the Gemini apps as well as Google’s AI Studio and Vertex AI platforms.
Google says the improved Gemini 2.5 Flash will be generally available to developers in June.
Finally, Google is launching a model called Gemini Diffusion, which the company claims is “very fast”: it delivers output 4-5x faster than comparable models while matching the performance of models twice its size. Gemini Diffusion is available today to “trusted testers.”