It's 2025, and Google is now introducing its own agentic AI feature in the Gemini app. While the company has shown agentic AI prototypes before, it now considers them ready for the mainstream. In its Google I/O 2025 keynote, Google explained how the new feature can go out onto the web by itself and work for you. Much like OpenAI's Operator, it can take a prompt, create a checklist of things that need to be done, and then do them for you.
According to Google, Agent Mode combines features like live web browsing and deep research with data integrations from Google apps to complete tasks online. The model is said to be able to execute multi-step actions from start to finish with minimal supervision from the user.

Credit: Google
We still don't know much about how the feature will actually work in practice, but Google gave us an example on stage. Here, Sundar Pichai asked Gemini to find a new apartment for rent within a limited budget and with access to in-unit laundry. Gemini then created a task list of steps, such as opening a browser, navigating to Zillow, searching for matching listings, and even booking a tour. All of this is possible because Google is using MCP in the background. The Model Context Protocol (introduced by Anthropic) is a new industry-wide protocol that websites and apps can use to integrate directly with AI tools. In this example, it means Google can search Zillow and book a tour through the protocol itself, rather than spinning up a web browser and having the AI click buttons for you.
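Google hasn't shared any implementation details, but for context, here is a minimal sketch of the kind of MCP server a listing site could expose, written against the open-source MCP Python SDK. The tool names (`search_listings`, `book_tour`) and the sample data are hypothetical illustrations, not a real Zillow or Google API; the idea is that an agent like Gemini discovers tools like these and calls them directly instead of driving a browser.

```python
# Minimal sketch of a hypothetical MCP server for apartment listings,
# using the open-source MCP Python SDK (the `mcp` package).
from mcp.server.fastmcp import FastMCP

server = FastMCP("apartment-listings")

@server.tool()
def search_listings(city: str, max_rent: int, in_unit_laundry: bool = False) -> list[dict]:
    """Return listings that fit the budget and amenity filters."""
    # Placeholder data; a real server would query the listing site's backend.
    listings = [
        {"id": "apt-101", "city": city, "rent": 1800, "in_unit_laundry": True},
        {"id": "apt-202", "city": city, "rent": 2400, "in_unit_laundry": False},
    ]
    return [
        l for l in listings
        if l["rent"] <= max_rent and (not in_unit_laundry or l["in_unit_laundry"])
    ]

@server.tool()
def book_tour(listing_id: str, date: str) -> str:
    """Book a tour for a listing and return a confirmation message."""
    return f"Tour booked for {listing_id} on {date}"

if __name__ == "__main__":
    # Runs the server over stdio so an MCP-capable agent can call its tools.
    server.run()
```

Because the protocol advertises each tool's name and parameters to the agent, the model can plan a sequence like "search, then book a tour" without any screen-scraping.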
Agentic abilities aren't limited to the Gemini app's Agent Mode, either. Google is also bringing a more modest version of them to Chrome and Google Search. For example, agentic features in AI Mode can help you find game tickets in the background.

Credit: Google
According to Google, Agent Mode will soon come to the US as an early preview for subscribers to the new Google AI Ultra plan, priced at $250 per month. There is no word on wider availability yet.