Google I/O 2025, Google’s largest developer conference of the year, took place Tuesday and Wednesday at the Shoreline Amphitheatre in Mountain View. We’re bringing you the latest updates from the event.
I/O showcases product announcements from across Google’s portfolio. We got lots of news relating to Android, Chrome, Google Search, YouTube, and, of course, Google’s AI-powered chatbot, Gemini.
Google hosted a separate event dedicated to Android updates: The Android Show. The company announced new ways to find lost Android phones and other items, additional device-level features for its Advanced Protection security program, safety tools to protect against scams and theft, and a new design language called Material 3 Expressive.
Here’s everything announced at Google I/O 2025.
Gemini Ultra
According to Google, Gemini Ultra (U.S. only for now) delivers the “highest level of access” to Google’s AI-powered apps and services. It’s priced at $249.99 per month and includes Google’s Veo 3 video generator, the company’s new Flow video editing app, and a powerful AI capability called Gemini 2.5 Pro Deep Think mode, which hasn’t launched yet.
AI Ultra comes with higher limits in Google’s NotebookLM platform and Whisk, the company’s image remixing app. AI Ultra subscribers also get access to Google’s Gemini chatbot in Chrome; some “agentic” tools powered by the company’s Project Mariner tech; YouTube Premium; and 30TB of storage across Google Drive, Google Photos, and Gmail.
Deep Think in Gemini 2.5 Pro
Deep Think is an “enhanced” reasoning mode for Google’s flagship Gemini 2.5 Pro model. It allows the model to consider multiple answers to a question before responding, boosting its performance on certain benchmarks.
Google didn’t explain how Deep Think works, but it could be similar to OpenAI’s o1-pro and upcoming o3-pro models, which likely use an engine to search for and synthesize the best solution to a given problem.
Deep Think is available to “trusted testers” via the Gemini API first. Google said it’s taking extra time to conduct safety evaluations before rolling Deep Think out widely.
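For readers curious what access through the Gemini API looks like in practice, here is a minimal Python sketch. It assumes Google’s publicly documented google-genai SDK and the base `gemini-2.5-pro` model ID; Deep Think itself is limited to trusted testers and has no published model name, so the helper and prompt below are purely illustrative, not an official example.

```python
# Illustrative sketch only: assembling a request for a Gemini model.
# Assumes the google-genai SDK's generate_content() interface; the
# Deep Think variant has no public model ID, so the base model is used.

def build_request(prompt: str, model: str = "gemini-2.5-pro") -> dict:
    """Build keyword arguments for client.models.generate_content()."""
    return {"model": model, "contents": prompt}

params = build_request("Weigh several candidate answers before responding: ...")

# With API credentials, the actual call would look like (not executed here):
#   from google import genai
#   client = genai.Client(api_key="YOUR_API_KEY")
#   response = client.models.generate_content(**params)
#   print(response.text)
print(params["model"])
```

The point of the parallel-answer design is that the model spends more compute per query, which is consistent with Google gating it behind a paid tier and a tester program.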
Veo 3 video-generating AI model
Google claims Veo 3 can generate sound effects, background noises, and even dialogue to accompany the videos it creates. Google also says Veo 3 improves on its predecessor, Veo 2.
Veo 3 is available Tuesday in Google’s Gemini chatbot app for subscribers to the company’s $249.99-per-month AI Ultra plan, where it can be prompted with text or an image.
Imagen 4 AI image generator
According to Google, Imagen 4 is fast, faster than Imagen 3. And it’ll soon get faster still: in the near future, Google plans to release a variant of Imagen 4 that’s up to 10x quicker than Imagen 3.
Imagen 4 is capable of rendering “fine details” like fabrics, water droplets, and animal fur, according to Google. It can handle both photorealistic and abstract styles, creating images in a range of aspect ratios and at up to 2K resolution.
Both Veo 3 and Imagen 4 will be used to power Flow, the company’s AI-powered video tool geared toward filmmaking.

Gemini app updates
Google announced that the Gemini apps now have more than 400 million monthly active users.
Gemini Live’s camera and screen-sharing capabilities will roll out to all users on iOS and Android this week. The feature, powered by Project Astra, lets people have near-real-time verbal conversations with Gemini while streaming video from their smartphone’s camera or screen to the AI model.
Google says Gemini Live will also start to integrate more deeply with its other apps in the coming weeks: it’ll soon be able to offer directions from Google Maps, create events in Google Calendar, and make to-do lists with Google Tasks.
Google says it’s updating Deep Research, Gemini’s AI agent that generates thorough research reports, by allowing users to upload their own private PDFs and images.
Stitch
Stitch is an AI-powered tool to help people design web and mobile app front ends by generating the necessary UI elements and code. Stitch can be prompted to create app UIs with a few words or even an image, providing HTML and CSS markup for the designs it generates.
Stitch is a bit more limited in what it can do than some other vibe coding products, but there’s a fair amount of customization on offer.
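Because Stitch emits ordinary HTML and CSS, its output is easy to hand-edit or drop into an existing project. As a rough illustration (hand-written here, not actual Stitch output), the markup for a simple sign-in card from such a generator might look like:

```html
<!-- Hand-written illustration of generator-style output; not real Stitch markup -->
<div class="card">
  <h2>Sign in</h2>
  <input type="email" placeholder="Email" />
  <button class="primary">Continue</button>
</div>
<style>
  .card    { max-width: 320px; padding: 24px; border-radius: 12px;
             box-shadow: 0 2px 8px rgba(0, 0, 0, 0.15); }
  .primary { width: 100%; padding: 12px; border: none;
             border-radius: 8px; background: #1a73e8; color: #fff; }
</style>
```

Plain markup like this is what separates a front-end design tool from a full app builder: there is no backend or state management, just UI scaffolding a developer can take over.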
Google also expanded access to Jules, its AI agent aimed at helping developers fix bugs in code. The tool helps developers understand complex code, create pull requests on GitHub, and handle certain backlog items and programming tasks.
Project Mariner
Project Mariner is Google’s experimental AI agent that browses and uses websites. Google says it has significantly updated how Project Mariner works, allowing the agent to take on nearly a dozen tasks at a time, and is now rolling it out to users.
For example, Project Mariner users can purchase tickets to a baseball game or buy groceries without ever visiting a third-party website. People can simply chat with Google’s AI agent, and it visits websites and takes actions for them.
Project Astra
Project Astra, Google’s low-latency, multimodal AI experience, will power an array of new experiences in Search, the Gemini AI app, and products from third-party developers.
Project Astra was born out of Google DeepMind as a way to showcase nearly real-time, multimodal AI capabilities. The company says it’s now building Project Astra glasses with partners including Samsung and Warby Parker, but it doesn’t have a launch date yet.

AI Mode
Google is rolling out AI Mode, the experimental Google Search feature that lets people ask complex, multi-part questions via an AI interface, to users in the U.S. this week.
AI Mode will support the use of complex data in sports and finance queries, and it’ll offer “try it on” options for apparel. Search Live, coming later this summer, will let you ask questions based on what your phone’s camera is seeing in real time.
Gmail is the first app to be supported with personalized context.
Beam 3D teleconferencing
Beam, previously called Starline, uses a combination of software and hardware, including a six-camera array and custom light field display, to let a user converse with someone as if they were in the same meeting room. An AI model converts video from the cameras, which are positioned at different angles and pointed at the user, into a 3D rendering.
Google claims Beam offers “near-perfect” millimeter-level head tracking and 60fps video streaming. When used with Google Meet, Beam provides an AI-powered real-time speech translation feature that preserves the original speaker’s voice, tone, and expressions.
And speaking of Google Meet, Google announced that Meet is getting real-time speech translation.
More AI updates
Google is launching Gemini in Chrome, which will give people access to a new AI browsing assistant that helps them quickly understand the context of a page and get tasks done.
Gemma 3n is a model designed to run “smoothly” on phones, laptops, and tablets. It’s available in preview starting Tuesday; it can handle audio, text, images, and videos, according to Google.
The company also announced a ton of AI Workspace features coming to Gmail, Google Docs, and Google Vids. Most notably, Gmail is getting personalized smart replies and a new inbox-cleaning feature, while Vids is getting new ways to create and edit content.
Video Overviews are coming to NotebookLM, and the company rolled out SynthID Detector, a verification portal that uses Google’s SynthID watermarking technology to help identify AI-generated content. Lyria RealTime, the AI model that powers its experimental music production app, is now available via an API.
Wear OS 6
Wear OS 6 brings a unified font to tiles for a cleaner app look, and Pixel Watches get dynamic theming that syncs app colors with watch faces.
The core promise of the new design reference platform is to let developers build better customization into their apps, along with seamless transitions. The company is releasing a design guideline for developers, along with Figma design files.

Google Play
Google Play is gaining a heap of new tools for Android developers, including fresh subscription management tools, topic pages so users can dive into specific interests, audio samples to give people a sneak peek at app content, and a new checkout experience to make buying add-ons smoother.
“Topic browse” pages for movies and shows (U.S. only for now) will connect users to apps tied to tons of shows and films. In addition, developers are getting dedicated pages for testing and releases, and tools to monitor and improve their app rollouts. Developers using Google Play will now be able to halt live app releases if a critical problem pops up.
Subscription management tools are also getting an upgrade with multi-product checkout. Devs will soon be able to offer subscription add-ons alongside their main subscriptions, all under a single payment.
Android Studio
Android Studio is integrating new AI features, including “Journeys,” an “agentic AI” capability that coincides with the release of the Gemini 2.5 Pro model, and an “Agent Mode” that will be able to handle more intricate development processes.
Android Studio will also get new AI capabilities, including an enhanced “crash insights” feature in the App Quality Insights panel. Powered by Gemini, this improvement will analyze an app’s source code to identify potential causes of crashes and suggest fixes.