The biggest reveal of Google I/O was that the company is getting back into the mixed reality game with its own prototype XR smart glasses. It's been years since we've seen anything substantial from the search giant on the AR/VR/XR front, but with the Android XR platform and a wealth of hardware partners to go with it, that finally seems to be changing.
After the keynote, Google gave me a very brief demo of the prototype device we saw on stage. I only got a few minutes with it, so my impressions are limited, but I was immediately struck by how the glasses compare with Meta's prototype and Snap's augmented reality glasses. While both of those are fairly chunky, Google's prototype was lighter and felt much more like a normal pair of glasses. The frames were slightly thicker than the ones I typically wear, but not dramatically so.
At the same time, there are some notable differences between Google's XR glasses and what Meta and Snap are building. Google's device only has a display on one side – in the right lens, which you can see in the image at the top of this article – so the visuals are more "glanceable" than immersive. I noticed this during Google's demos at I/O: the field of view looked narrow, and I can confirm it feels much more limited than the 46-degree field of view on Snap's glasses. (Google declined to share specifics on how wide the field of view is on its prototype.)
Instead, the display felt more like the front screen of a foldable phone. You can use it to glance at the time and small snippets of information from your apps, like the music you're listening to.
Gemini is set to play a major role in the Android XR ecosystem, and Google walked me through a few demos of the AI assistant working on the smart glasses. I could look at a display of books or some art on the wall and ask Gemini questions about what I was seeing. It felt very similar to the multimodal capabilities we've seen with Project Astra and elsewhere.
There were some bugs, though, even in the carefully orchestrated demo. In one instance, Gemini started telling me about what I was looking at before I had finished my question, followed by an awkward moment where we both paused and talked over each other.
One of the more interesting use cases Google showed was Google Maps in the glasses. You can get a heads-up view of your next turn, much like Google's augmented reality walking directions, and look down to see a small section of the map on the floor. However, when I asked Gemini how long it would take to drive to San Francisco from where I was, it wasn't able to answer. (It actually said something like "tool output," and my demo ended very shortly after.)
I really liked how Google took advantage of the glasses' onboard camera. When I took a photo, a preview of the image immediately popped up on the display so I could see how it turned out. I appreciated this because framing photos with a camera on smart glasses is inherently unpredictable, since the final image depends on where the lens sits. I've often wished for something like this while taking photos with my Ray-Ban Meta smart glasses, so it was nice to see a version of it in action.
Honestly, I still have a lot of questions about Google's vision for XR and what eventual Gemini-integrated smart glasses will look like. As with many other mixed reality demos I've seen, it's clearly still very early days. Google was careful to emphasize that this is prototype hardware meant to show what Android XR is capable of, not a device it plans to sell anytime soon. So any smart glasses we eventually see from Google or its hardware partners could look very different.
What my few minutes with Android XR did show, though, was how Google is thinking about bringing AI and mixed reality together. That's not so different from Meta, which has long seen smart glasses as key to driving adoption of its AI assistant. But now that Gemini is coming to just about every Google product in existence, the company has a pretty solid foundation to pull it off.