Google I/O 2025 was filled with announcements. The problem is that Google isn't always clear about which features are new, which have already been released, and which are still to come.
While plenty of features are still on the horizon, and a number won't be usable for some time, some rolled out immediately after Google's announcements. Here are all the Google I/O features you can try right now, though for some you'll need to pay.
Imagen 4

Credit: Google
Google’s latest AI image-generation model, Imagen 4, is available today. Google was sparse on specific upgrades with this new model, but says Imagen 4 is sharper, and is now capable of generating images at up to 2K resolution in additional aspect ratios.
The change the company focuses on most is typography: Google says Imagen 4 can generate text without the common AI errors you associate with AI image generators. On top of that, the model can incorporate various art styles and design choices based on the context of the prompt. You can see that in the image above, which uses a pixelated design to match the 8-bit comic-strip look of the text.
You can try the latest Imagen model through the Gemini apps, Whisk, Vertex AI, and Workspace apps like Slides, Vids, and Docs.
AI Mode

Credit: Lifehacker
AI Mode is essentially Search in a Gemini chat: it lets you ask more complex, multi-step questions. Google then uses a “query fan-out” technique to scan the web for relevant links and generate a full answer from those results. I haven’t dived very deeply into this feature, but it largely works as advertised; I’m just not sure it’s all that much more useful than digging through the links myself.
Google has been testing AI Mode since March, but it’s now available to everyone in the US. If you want to use it, you should see a new AI Mode option to the right of the search bar on Google’s homepage.
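Google hasn’t published the details of its “query fan-out” technique, but the general pattern it describes — splitting one complex question into sub-queries, searching them in parallel, and merging the resulting links — can be sketched in a few lines of Python. The mock index and helper names below are purely illustrative, not Google’s actual system:

```python
from concurrent.futures import ThreadPoolExecutor

# Tiny mock "web index": maps a sub-query keyword to result links.
# This stands in for a real search backend.
MOCK_INDEX = {
    "hotels": ["travel.example/hotels"],
    "weather": ["weather.example/forecast"],
    "flights": ["air.example/fares"],
}

def search(sub_query: str) -> list[str]:
    """Look up one sub-query against the mock index."""
    return MOCK_INDEX.get(sub_query, [])

def query_fan_out(question: str) -> list[str]:
    """Split a complex question into sub-queries, search them in
    parallel, and merge the result links (deduplicated, in order)."""
    sub_queries = [w for w in question.lower().split() if w in MOCK_INDEX]
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(search, sub_queries)
    merged: list[str] = []
    for links in result_lists:
        for link in links:
            if link not in merged:
                merged.append(link)
    return merged

print(query_fan_out("Find hotels and check weather for my trip"))
# prints ['travel.example/hotels', 'weather.example/forecast']
```

In the real feature, the merged results would then be fed to a language model to compose the full answer; here the merge step is the end of the sketch.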
“Try It On”

Credit: Google
Online shopping is in most ways much more convenient than shopping in person, with one exception: you can’t try on any of the clothes ahead of time. Once they arrive, you try them on, and if they don’t fit, or you don’t like the look, back to the store they go.
Google wants to eliminate this (or, at least, cut it down). Its new “Try It On” feature scans an image you provide of yourself to get an understanding of your body. Then, when you’re browsing online for new clothes, you can choose “Try It On,” and Google’s AI will generate an image of you wearing that article of clothing.
It’s an interesting concept, but also a bit creepy. I personally don’t want Google analyzing my body so that it can more accurately map clothing onto me; I’d rather risk making a return. But if you want to give it a go, you can try the experimental feature in Google Labs today.
Jules
Jules is Google’s “asynchronous, agentic coding assistant.” According to Google, the assistant clones your codebase into a secure Google Cloud virtual machine, so it can execute tasks like writing tests, building features, generating changelogs, fixing bugs, and bumping dependency versions.
The assistant works in the background and doesn’t use your code for training, which is mildly refreshing from a company like Google. I’m not a coder, so I can’t say for sure how useful Jules actually is. But if you are a coder, you can test it for yourself. As of today, Jules is available as a free public beta to anyone who wants to try it, though Google says usage limits apply, and that it will charge for different plans once the “platform matures.”
Speech translation in Google Meet

Credit: Google
If you’re a Google Workspace customer, this next feature is great. As shown during the I/O keynote, Google Meet now has live speech translation. Here’s how it works: Suppose you’re on a Google Meet call with someone who speaks Spanish, but you only speak English. You’ll hear the other caller speaking in Spanish for a moment, before an AI voice dubs over them with an English translation. When you speak, they’ll get the reverse on their end.
Google is working on adding more languages in the coming weeks.
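The dubbing flow described above — you briefly hear the original speech, then an AI voice delivers the translation — can be sketched as a toy pipeline. The dictionary lookup below stands in for real speech recognition, machine translation, and voice synthesis; none of these helpers reflect Google’s actual APIs:

```python
# Toy translation table standing in for a real translation model.
TRANSLATIONS = {("es", "en"): {"hola": "hello", "gracias": "thank you"}}

def translate(text: str, src: str, dst: str) -> str:
    """'Translate' word by word via the toy table, passing
    unknown words through unchanged."""
    table = TRANSLATIONS.get((src, dst), {})
    return " ".join(table.get(w, w) for w in text.lower().split())

def dub_utterance(text: str, src: str, dst: str) -> str:
    """Mimic Meet's flow: the listener gets a moment of the
    original audio, then the AI-dubbed translation."""
    translated = translate(text, src, dst)
    return f"[original {src}] {text} -> [dubbed {dst}] {translated}"

print(dub_utterance("hola", "es", "en"))
# prints [original es] hola -> [dubbed en] hello
```

In the real feature, each direction of the call runs this flow with the source and target languages swapped, which is why both sides hear dubs in their own language.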
Google AI Ultra subscription

Credit: Google
Google has a new subscription tier, though it’s not for the faint of heart. Google announced the new “AI Ultra” subscription at I/O yesterday, which costs $250 per month.
That extraordinary price tag comes with some major AI features: you get access to the highest limits on all of Google’s AI models, including Gemini 2.5 Deep Think, Veo 3, and Project Mariner. It also comes with 30TB of cloud storage and, amusingly, a YouTube Premium subscription.
You’d have to be a real believer in AI to drop over $3,000 per year on this subscription. If you have only a budding curiosity about AI, perhaps Google’s “AI Pro” plan is enough: it’s a new name for Google’s AI Premium subscription, and it comes with the same allowances as before, as well as access to Flow (which I’ll cover below).
Veo 3
Veo 3 is Google’s latest AI video model. Unlike Imagen 4, however, it’s only available to AI Ultra subscribers. If you’re not comfortable spending $250 per month on Google’s services, you’ll have to live with Veo 2.
Google says Veo 3 is better at real-world physics than Veo 2 and can handle realistic lip-syncing. You can see that in the clip above, which shows an “old sailor” reciting a poem. His lips actually match the speech, and the video is crisp with elements of realism. I don’t personally think it looks “real,” and it still reads as an AI video, but there’s no doubt we’re entering some dangerous waters with AI video.
AI Pro subscribers using Veo 2 get some new video-model capabilities as well. You now have camera controls to decide how shots are framed; options to adjust the aspect ratio of a clip; tools to add or remove objects from a scene; and “outpainting” controls to expand the frame of a clip.
Flow
Google didn’t just upgrade its AI video model: it also released an AI video editor, called Flow.
Flow lets you generate videos using Veo 2 and Veo 3, but it also lets you cut those clips together on a timeline and control the camera movements within your clips. You can use Imagen to generate an element you want to add to a scene, then ask Veo to generate a clip containing that element.
I’m sure enthusiasts are going to like it, but I’m skeptical. I could see it being a useful tool for storyboarding ideas, but for making real content? I know I don’t want to watch full shows or movies generated by AI. Maybe the odd Instagram video would get a chuckle out of me, but I don’t think reels are Google’s ultimate goal here.
Flow is available to both AI Pro and AI Ultra subscribers. If you have AI Pro, you can use Veo 2, while AI Ultra subscribers can choose between Veo 2 and Veo 3.
Gemini in Chrome

Credit: Google
AI Pro and AI Ultra subscribers now have access to Gemini in Google Chrome, which appears in the toolbar of your browser window. You can ask the assistant to summarize a web page, as well as ask questions about elements of that page. There are plans for agentic features in the future, so Gemini will be able to browse websites for you, but, for now, you’re really limited to those two tasks.