Google wrapped up its big keynote at I/O 2025. As expected, it was filled with AI-related announcements, from updates to Google’s image and video generation models to new features in Search and Gmail.
But there were some surprises, too, such as a new AI filmmaking app and an update to Project Starline. If you didn’t catch the event live, you can check out everything you missed in the roundup below.
Google announced that it’s rolling out AI Mode, a new tab that lets all users in the US search the web using the company’s Gemini AI chatbot.
Google will test new features in AI Mode this summer, such as deep search and a way to generate charts for finance and sports queries. It’s also rolling out the ability to shop in AI Mode in the “coming months.”
Project Starline, which began as a 3D video chat booth, is taking a big step forward. It’s becoming Google Beam and will soon launch inside an HP-branded device with a light field display and six cameras to create a 3D image of the person you’re video chatting with.
Companies such as Deloitte, Duolingo, and Salesforce have already said that they will add HP’s Google Beam devices to their offices.
Google announced the latest version of its AI text-to-image generator, Imagen 4, which the company says is better at generating text and offers the ability to export images in more formats, such as square and landscape. Its next-gen AI video generator, Veo 3, will let you generate video and sound together, while Veo 2 now comes with tools such as camera controls and object removal.
In addition to updating its AI models, Google is launching a new AI filmmaking app called Flow. The tool uses Veo, Imagen, and Gemini to create eight-second AI-generated video clips based on text prompts and/or images. It also comes with scene-builder tools to stitch clips together and make longer AI videos.
Gemini 2.5 Pro adds an “enhanced” reasoning mode
The experimental Deep Think mode is meant for complex queries related to math and coding. It’s capable of considering “multiple hypotheses” before responding and will initially only be available to trusted testers.
Google has made its Gemini 2.5 Flash model available to everyone in its Gemini app, and it’s bringing the cost-efficient model to Google AI Studio ahead of a broader rollout.
Xreal and Google are teaming up on Project Aura, a new pair of smart glasses that uses the Android XR platform for mixed-reality devices. We don’t know much about the glasses yet, but they’ll come with Gemini integration and a large field of view, along with built-in cameras and microphones.
Google is also partnering with Samsung, Gentle Monster, and Warby Parker to make other Android XR smart glasses.
Project Astra can already use your phone’s camera to “see” the objects around you, but the latest prototype will allow it to complete tasks on your behalf, even if you don’t explicitly ask it to. The model can choose to speak up based on what it’s seeing, such as pointing out a mistake on your homework.
Google is building its AI assistant into Chrome. Starting May 21st, Google AI Pro and Ultra subscribers will be able to select the Gemini button in Chrome to have it clarify or summarize information on a webpage and navigate sites on their behalf. Google plans to let Gemini work across multiple tabs at once later this year.
Google is launching a new “AI Ultra” subscription, which offers access to the highest usage limits across the company’s most advanced AI models and apps, such as Gemini, NotebookLM, Flow, and more. The subscription also includes early access to Gemini in Chrome and Project Mariner, which can now complete 10 tasks at once.
Speaking of Project Astra, Google Search is launching Search Live, a feature that incorporates capabilities from the AI assistant. By selecting the new “Live” icon in AI Mode or Lens, you can talk back and forth with Search while showing it what’s on your camera.
After making Gemini Live’s screensharing feature free for all Android users last month, Google announced that iOS users will be able to access it for free as well.
Google has revealed a new AI-powered tool called Stitch that can generate interfaces using selected themes and a description. You can also incorporate wireframes, rough sketches, and screenshots of other UI designs to guide Stitch’s output. The experiment is currently available on Google Labs.
Google Meet is launching a new feature that translates your speech into your conversation partner’s preferred language in near real time. The feature currently only supports English and Spanish. It’s rolling out in beta for Google AI Pro and Ultra subscribers.
Gmail’s smart reply feature, which uses AI to suggest responses to your emails, will now pull information from your inbox and Google Drive to make responses sound more like you. The feature will also take your recipient’s tone into account, allowing it to suggest more formal responses in a conversation with your boss, for example.
Launching through Google Labs in July, Gmail’s upgraded smart replies will be available on the web, iOS, and Android in English.
Google is testing a new feature that lets you upload a full-length photo of yourself to see how shirts, pants, dresses, or skirts might look on you. It uses an AI model that “understands the human body and nuances of clothing.”
Google will soon let you shop in AI Mode, along with an “agentic checkout” feature that can purchase products on your behalf.
If Chrome detects that your password has been compromised, Google says the browser will soon be able to “generate a strong replacement” and automatically update it on supported websites. The feature launches later this year, and Google says it will always ask for consent before changing your password.