
Over the past few years, with the introduction of the Apple Vision Pro and Meta Quest 3, I have become a believer in the potential of mixed reality.
First, and this was a big concern for me, it is possible to use a VR headset without barfing. Second, some of the applications are truly amazing, especially for entertainment. While the ability to watch a movie on a huge screen is terrific, the fully immersive 3D experiences on the Vision Pro are really quite compelling.
In this article, I'm going to show you a technology that has the potential to make VR devices like the Vision Pro and Quest 3 obsolete. But first, I want to recount an experience I had with the Vision Pro, one that had a reality-bending effect.
Later, when we discuss the Stanford research, you'll see how the team may be able to do something like what I experienced, and take it to the next level.
Also: These XR glasses gave me a 200-inch screen to work with
There is a Vision Pro experience called Wild Life. I watched the rhino episode in early 2024, which told the story of a wildlife sanctuary in Africa. While watching, I really felt as if I could reach out and touch the animals; they were that close.
But here is where it gets interesting. Whenever a TV show features someplace I have actually been in real life, my brain pops up a little internal dialog box that says, “I've been there.”
So, some time after I watched the Vision Pro episode about the rhino sanctuary, I saw a news story about the place. And wouldn't you know it? My brain said, “I've been there,” even though I have never been to Africa. Something about the VR immersion indexed that episode in my brain as a real, lived experience, not just something I watched.
To be clear, I knew at the time that it was not a real experience. I know now that it was not a real-life experience. Nevertheless, some little parameter deep in my brain still indexed it into the lived-experience table instead of the watched-experience table.
Also: I finally tried Samsung's XR headset, and it beats my Apple Vision Pro in meaningful ways
But there are some widely known problems with the Vision Pro. It is very expensive, but it's not just that. I own one. I bought it so I could write about it for you. Yet even though I have it here, and movies look terrific on it, I only use it when I have to for work.
Why? Because it is also deeply uncomfortable. It is like strapping a brick to your face. It is heavy, hot, and so intrusive that you can't even take a sip of coffee while using it.
The Stanford research
All of that brings us to some Stanford research that I first covered last year.
A team of scientists led by Gordon Wetzstein, a professor of electrical engineering and director of the Stanford Computational Imaging Lab, is working on solving both immersion and comfort by using holography instead of conventional display technology.
Using a combination of optical nanostructures called waveguides and AI, the team managed to build a prototype device. By controlling the intensity and phase of light, they are able to manipulate light at the nano level. The challenge is making real-time adjustments to all those nano-level light patterns based on the environment.
Also: We tested the best AR and MR glasses: Here's how the Meta Ray-Bans stack up
All of that took a ton of AI: to improve image formation, optimize the wavefront, handle wildly complex calculations, perform pattern recognition, deal with the thousands of variables involved in light propagation (phase changes, interference patterns, diffraction effects, and more), and then correct for dynamic changes.
Add to that managing real-time processing and adaptation at both the super-micro and super-macro levels, machine learning that continually refines the holographic images, handling the non-intuitive, high-dimensional data that comes from working with waveguides, and then integrating optical data, positional data, and environmental information.
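To give a flavor of what continually refining a holographic image actually involves, here's a minimal sketch of the classic, pre-AI approach to the problem: the Gerchberg-Saxton phase-retrieval loop. To be clear, this is my own illustration, not the Stanford team's code or algorithm; their system layers learned models on top of this kind of physics.

```python
# A minimal sketch of iterative holographic image refinement. This is not the
# Stanford team's code; it is the classic Gerchberg-Saxton phase-retrieval loop,
# shown only to illustrate how a phase pattern on a spatial light modulator (SLM)
# can be refined until the propagated light forms a target image.
import numpy as np

def gerchberg_saxton(target_amplitude: np.ndarray, iterations: int = 50) -> np.ndarray:
    """Return an SLM phase pattern whose far-field intensity approximates the target."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)  # random starting phase
    for _ in range(iterations):
        # Forward "propagation" from the SLM plane to the image plane (Fourier model)
        slm_field = np.exp(1j * phase)
        image_field = np.fft.fft2(slm_field)
        # Keep the propagated phase, but impose the target amplitude
        image_field = target_amplitude * np.exp(1j * np.angle(image_field))
        # Propagate back and keep only the phase (a phase-only SLM can't set amplitude)
        slm_field = np.fft.ifft2(image_field)
        phase = np.angle(slm_field)
    return phase

# Usage: optimize a phase pattern for a simple bright-square target
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
phase_pattern = gerchberg_saxton(target)
print(phase_pattern.shape)  # (256, 256) array of phase values in radians
```

The toy example only captures the shape of the problem: you can control only the phase of the light at the modulator, so you bounce back and forth between the modulator plane and the image plane until the propagated light resembles the target. The real system has to do this kind of refinement under far messier physics, in real time, for a moving viewer, which is where the machine learning earns its keep.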
That was a lot of engineering. But it still wasn't enough.
The visual Turing test
The reason I mentioned the rhino at the beginning of this article has to do with what the Stanford team has just released: a new research report, published in the journal Nature Photonics, showing how they are trying to go beyond the perception of reality that is possible with screen-based display technology.
In the 1950s, digital pioneer Alan Turing proposed what has become known as the Turing test. Essentially, if a human can't tell whether it's a machine or another human at the other end of a conversation, that machine is said to pass the Turing test.
The Stanford folks are proposing the idea of a visual Turing test, where a mixed reality device passes the test if you can't tell whether what you're looking at is real or computer-generated.
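How would you score such a test? Here's one simple way to think about it, purely as my own illustration (the paper doesn't prescribe this protocol): show observers pairs of scenes, one real and one holographically rendered, ask which is real, and call the display a pass if observer accuracy can't be statistically distinguished from coin-flipping.

```python
# A minimal, hypothetical scoring scheme for a "visual Turing test" (my sketch,
# not the paper's protocol). Observers view pairs of scenes, one real and one
# holographically rendered, and guess which is real. If their accuracy is not
# significantly better than chance (50%), the display "passes."
from statistics import NormalDist

def passes_visual_turing_test(correct: int, trials: int, alpha: float = 0.05) -> bool:
    """One-sided test: can observers do significantly better than guessing?"""
    p_hat = correct / trials
    se = (0.25 / trials) ** 0.5            # standard error of a proportion at p = 0.5
    z = (p_hat - 0.5) / se
    p_value = 1 - NormalDist().cdf(z)
    return p_value > alpha                 # True means observers can't beat chance

print(passes_visual_turing_test(correct=103, trials=200))  # ~51.5% correct: passes
print(passes_visual_turing_test(correct=150, trials=200))  # 75% correct: fails
```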
Also: The day reality became unbearable: Apple's AR/VR
Setting aside all the uber-deepfake nightmares and my little story, the Stanford team says that no matter how high-resolution stereoscopic LED technology gets, it is still flat. The human brain, they say, will always be able to distinguish 3D represented on a flat display from true reality.
As real as it can look, there is still an uncanny valley that the brain can perceive.
But holography shapes light the way physical objects do. The Stanford scientists say they can create a holographic display that produces 3D objects that are every bit as dimensional as real objects. In doing so, they would pass their own visual Turing test.
“A visual Turing test means that, ideally, no one can distinguish between the physical, real thing being seen through the glasses and a digitally created image being presented on the display surface,” said a postdoctoral scholar in Wetzstein's lab and first author of the paper.
Also: Meta just launched a $400 Xbox edition of the Quest 3S headset, and it's full of surprises
I'm not so sure about this. Yes, I buy the idea that they will be able to produce eyewear that bends light to replicate reality. But I wear glasses. There is always a periphery beyond the edges of my glasses that I can see and perceive.
Unless they make headsets that block out that peripheral vision, they won't really be able to mimic reality. That is probably doable; the Meta Quest 3 and Vision Pro both wrap around the eyes. But if Stanford is aiming for holographic glasses that feel like normal glasses, peripheral vision could complicate things.
In any case, let’s talk about how far they have come in a year.
That was then, this is now
Let's start by defining the technical term “étendue.” According to the Dictionnaires Le Robert, and translated into English by Google, étendue “is the property of bodies to be situated in space and to occupy a portion of it.”
Optical scientists use it to combine two characteristics of a visual experience: the field of view (how wide an image appears) and the eyebox (the area within which the eye can move and still see the entire image).
A larger étendue provides both a wider field of view and enough room for the eye to move around, as it would in real life, while still seeing the generated image.
Since we reported on the project in 2024, the Stanford team has increased the field of view (FOV) from 11 degrees to 34.2 degrees horizontally and 20.2 degrees vertically.
That's still a far cry from the 110-degree horizontal and 96-degree vertical FOV of the Quest 3, or even the Vision Pro's roughly 100 degrees. For comparison, each human eye has a field of view of about 140 degrees and, combined, they give us a field of view of about 200 degrees.
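To make the étendue trade-off a little more concrete, here's a rough back-of-the-envelope sketch. The eyebox dimensions are made-up values, and the formula is just the standard simplified étendue product (eyebox area times the solid angle of the FOV), not anything from the Stanford paper; it simply shows why widening the field of view without shrinking the eyebox demands more étendue from the optics.

```python
# A rough, back-of-the-envelope illustration of étendue (my simplification, not
# the paper's math). In simplified form, étendue ~ eyebox area x FOV solid angle,
# so for fixed optics, widening the FOV tends to shrink the eyebox and vice versa.
import math

def solid_angle_sr(fov_h_deg: float, fov_v_deg: float) -> float:
    """Approximate solid angle of a rectangular field of view, in steradians."""
    return 4 * math.asin(
        math.sin(math.radians(fov_h_deg) / 2) * math.sin(math.radians(fov_v_deg) / 2)
    )

def etendue_mm2_sr(eyebox_w_mm: float, eyebox_h_mm: float,
                   fov_h_deg: float, fov_v_deg: float) -> float:
    """Simplified étendue: eyebox area (mm^2) times FOV solid angle (sr)."""
    return eyebox_w_mm * eyebox_h_mm * solid_angle_sr(fov_h_deg, fov_v_deg)

# Illustrative numbers only (the 8 mm eyebox is an assumption, not from the paper;
# the 11-degree 2024 figure is treated as both dimensions for simplicity)
print(etendue_mm2_sr(8, 8, 11, 11))      # narrow FOV, roughly like the 2024 prototype
print(etendue_mm2_sr(8, 8, 34.2, 20.2))  # the wider FOV reported this year
```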
Also: This AR headset is changing how surgeons see inside their patients
This year, the team developed a custom-designed, angle-encoded holographic waveguide. Instead of the surface relief gratings (SRGs) used in 2024, the new prototype is built around volume Bragg gratings (VBGs). VBGs prevent the “world-side light leakage” and visual noise that could degrade contrast in the previous design, and they also suppress stray light and ghost images.
Both SRGs and VBGs are used to control how light bends or splits. SRGs work through a tiny pattern on the surface of a material; light diffracts off that surface. VBGs instead encode variations inside the material, reflecting or filtering light according to how that internal structure interacts with light waves. VBGs essentially provide more control over how the light moves.
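The selectivity that lets VBGs suppress stray light comes from the Bragg condition: only light near one particular combination of wavelength and angle is strongly reflected by the grating's internal structure. Here's a minimal illustration of that condition, with hypothetical grating values rather than anything from the paper.

```python
# A minimal sketch of the Bragg condition for a volume grating (illustrative
# values, not the paper's specs). Only light near a specific wavelength/angle
# pair is strongly reflected, which is why VBGs reject stray light and ghost
# images better than surface gratings.
import math

def bragg_wavelength_nm(n: float, period_nm: float, angle_deg: float) -> float:
    """Reflected wavelength for a volume grating: lambda = 2 * n * period * cos(theta)."""
    return 2.0 * n * period_nm * math.cos(math.radians(angle_deg))

# Hypothetical grating: refractive index 1.5, 180 nm period, near-normal incidence
print(bragg_wavelength_nm(n=1.5, period_nm=180, angle_deg=5))  # ~538 nm (green light)
```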
Another key element of the latest prototype is a MEMS (micro-electromechanical system) mirror. This mirror is integrated into the light module along with a collimated, fiber-coupled laser and the holographic waveguide we discussed above. It is another tool for steering light, in this case varying the angle at which light hits the spatial light modulator (SLM).
This, in turn, creates what the team calls a “synthetic aperture,” which has the benefit of enlarging the eyebox. Remember that the larger the eyebox, the more the user's eye can move around while still using the mixed reality system.
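As a toy picture of why steering the light helps, think of it geometrically: each steering angle shifts where the exit pupil lands at the eye, and sweeping quickly through several angles stitches those shifted pupils into one larger, synthetic eyebox. The numbers and the geometry below are my simplification, not the paper's optical model.

```python
# A toy geometric illustration (my simplification, not the paper's optics) of how
# steering the illumination angle can widen the effective eyebox: each angle
# shifts the exit pupil at the eye, and sweeping through angles combines them.
import math

def synthetic_eyebox_mm(base_eyebox_mm: float, eye_relief_mm: float,
                        max_steer_deg: float) -> float:
    """Approximate eyebox width when the pupil is steered +/- max_steer_deg."""
    shift = eye_relief_mm * math.tan(math.radians(max_steer_deg))
    return base_eyebox_mm + 2 * shift

# Hypothetical numbers for illustration only
print(synthetic_eyebox_mm(base_eyebox_mm=4, eye_relief_mm=18, max_steer_deg=5))  # ~7.1 mm
```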
Also: HP just turned Google Beam's hologram calls into reality, and you can buy it this year
AI continues to play an important role in the dynamic operation of the display, compensating for real-world conditions and helping to create a seamless blend of actual reality and manufactured reality. The AI also adapts the holographic images to improve their quality and realism.
Last year, the team did not specify the size of the prototype eyewear, except to say it was smaller than a typical VR display. This year, the team says it has achieved a total optical stack thickness of less than 3mm. For comparison, the lenses in my everyday glasses are about 2mm thick.
“We want it to be compact and lightweight for all-day use, basically. This is problem number one, the biggest problem,” said Wetzstein.
Trilogy
The Stanford team has described these reports as a trilogy chronicling their progress. Last year's report was volume one. This year, we are learning about their progress in volume two.
It is not clear how far off volume three, which the team describes as real-world deployment, might be. But given the improvements they are making, I am guessing we will see more progress (and possibly volumes four and five) sooner rather than later.
Also: I wore Google's XR glasses, and they already beat my Ray-Ban Meta in 3 ways
I am not entirely sure that blending reality with holographic images to the point where you can't tell the difference is healthy. On the other hand, actual reality can be pretty disturbing, so building our own little bubbles of holographic reality might provide some relief (or a new pathology).
It is all just so strange, and sometimes so scary. But that is the world we live in.
What do you think of the idea of a “visual Turing test”? Do you believe holographic displays could actually fool the brain into thinking digital imagery is real? Have you tried any of the existing mass-market mixed reality headsets like the Vision Pro or Quest 3? How immersive did they feel? Do you think Stanford's waveguide-based holographic approach could overcome the comfort and realism barriers holding back mainstream XR adoption? Let us know in the comments below.
Get the morning's top stories in your inbox each day with our Tech Today newsletter.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @Davidgewirtz, on Facebook at Facebook.com/davidgewirtz, on Instagram at Instagram.com/davidgewirtz, on Bluesky at @Davidgewirtz.com, and on YouTube at Youtube.com/davidgewirtztv.

