Tired of barking commands at your AI assistant? Google's next generation of universal AI assistants could solve that problem for you, and much more.
AI may soon control your Android phone
Google has demonstrated its vision for a "universal AI assistant" that can understand the context around you, come up with a solution, and act on it on your behalf. The goal is to create an all-seeing, all-hearing assistant that can automatically figure out what you need and step in, without chimes and without you having to manually invoke it.
This new assistant is called Project Astra, and Google showed some very impressive demos of it at I/O 2025. In one demo, a user having trouble with his bike's brakes asks Astra to find the bike's user manual online.
Once Astra pulls up the manual, the user asks it to scroll until it finds the section covering the bike's brakes, which it does effortlessly. The user then asks Astra to find a video tutorial on YouTube, and even to look up contact information for a bike shop to ask about the parts he needs. Astra is also able to call the bike shop closest to the user and ask whether the necessary parts are in stock.
Another reported demo showed Bibo Xiu, a product manager on the Google DeepMind team, pointing her phone camera at a pair of Sony headphones and asking Astra to identify them. Astra replied that they were either the WH-1000XM4 or the WH-1000XM3, a mix-up I'd wager most humans would make as well.
Once they were identified, Xiu asked Astra to pull up a manual and explain how to pair them with her phone, only to interrupt the AI assistant mid-sentence and ask it to pair the headphones for her. As you can probably guess, Astra did so without any issue.
From the demo, it appears that Astra simulates screen input to move around the interface. Screen recording indicators also suggest that Astra reads your screen and decides where to tap, navigating various user interfaces as it goes about its task.
A universal AI assistant is on the horizon
As impressive as these demos are, they aren't perfect. They still require user input; in Xiu's demo, that meant manually enabling the feature that gives Astra access to her phone's screen.
For now, Project Astra is more of a testbed for Google's wildest AI ambitions. The features that perform well here eventually trickle down to products such as Gemini and become available to the rest of us. Google says its ultimate vision is "transforming the Gemini app into a universal AI assistant that will perform everyday tasks for us."
Google is hard at work, gradually phasing out older tools in favor of new, AI-powered ones. AI Mode is replacing Google Search, and Gemini already has a list of impressive features you should try.
That said, even today's most advanced AI systems need you to enter prompts at every step, supply the required data and context, and you may still need to take manual action yourself. Since Astra can use the internet and Google's services, it aims to replace all those inputs by accessing your information across various platforms and building the necessary context to take action on its own.
That is not an easy goal to achieve, and I haven't even started on the privacy and safety issues that a universal AI assistant such as Astra could potentially raise. Astra could do all of the heavy lifting locally using the Gemini Nano model, but the demos give no indication that this is the case.
Building such an assistant is going to take some time, but with these demos, Google has shown us a glimpse of the future. It may not arrive soon, but a universal AI assistant is on the horizon, and I am eagerly awaiting its arrival.