Apple introduced its long-awaited mixed reality headset, the Vision Pro, on Monday, the tech giant’s first major new product category since the Apple Watch, which was announced in 2014. When it ships in early 2024, the headset will cost $3,499 and is aimed more at developers and content creators than at typical consumers. As sci-fi as it seems, the headset might usher in a new age not only for Apple but for the whole industry. Apple bills the Vision Pro as the world’s first spatial computer, but what exactly does it do?
Below, we break down the science behind the Vision Pro headset.
What is Apple’s Vision Pro?
By layering a digital overlay onto your real surroundings, the Apple Vision Pro brings the digital into the physical world. When you put on the headset, which looks like a pair of ski goggles, the Apple experience you are probably accustomed to from iPhones and Macs is carried out into the space around you.
But it’s not quite that straightforward. The Vision Pro follows in the footsteps of many other Apple devices: a plethora of complicated technology powers what appears to be a simple user interface and experience.
As Apple put it when unveiling the device: “Almost every aspect of the system required invention when we built our first spatial computer.”
How Does the Headset Work?
It helps to understand what the headset does before we get into how it works. The mixed-reality headset renders Apple’s new visionOS operating system in three dimensions using a built-in display and lens system. Vision Pro users interact with the operating system using their hands, voice, and eyes.
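To make that interaction model concrete, here is a minimal visionOS app sketch in SwiftUI. The app and view names are our own placeholders; the point is that on Vision Pro, looking at the button and pinching your fingers fires the same tap handler a touch would on iOS, so no gaze-specific code is needed for basic interaction.

```swift
import SwiftUI

// Minimal visionOS app skeleton (illustrative; names are placeholders).
// On Vision Pro, the standard button action fires when the user looks
// at the button (eye tracking) and pinches their fingers (hand tracking),
// so basic apps need no gaze-specific code at all.
@main
struct HelloSpatialApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    @State private var tapCount = 0

    var body: some View {
        VStack(spacing: 20) {
            Text("Taps: \(tapCount)")
            Button("Pinch me") {
                tapCount += 1   // triggered by look-and-pinch on visionOS
            }
        }
        .padding()
    }
}
```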
Infrared cameras inside the headset track your eye movements, allowing the device to update the internal display based on where you look, mimicking how the image of your surroundings would shift as your eyes move.
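Apple pairs this eye tracking with foveated rendering: only the small patch you are actually looking at is drawn at full resolution. The sketch below is not Apple’s pipeline, just a plain Swift illustration of the idea, with made-up gaze coordinates and resolution tiers.

```swift
import Foundation

// Illustrative sketch of eye-tracked (foveated) rendering, not Apple's
// actual pipeline. Given a gaze point in normalized screen coordinates,
// pick a per-tile render scale: full resolution near the gaze, reduced
// resolution in the periphery, where human vision is less sharp.
struct GazePoint { var x: Double; var y: Double }  // 0...1 in each axis

func renderScale(forTileAt tile: GazePoint, gaze: GazePoint) -> Double {
    let dx = tile.x - gaze.x
    let dy = tile.y - gaze.y
    let distance = (dx * dx + dy * dy).squareRoot()
    switch distance {
    case ..<0.1:  return 1.0    // fovea: full resolution
    case ..<0.3:  return 0.5    // near periphery: half resolution
    default:      return 0.25   // far periphery: quarter resolution
    }
}

// Example: the tile under the gaze renders at full scale.
let gaze = GazePoint(x: 0.5, y: 0.5)
print(renderScale(forTileAt: GazePoint(x: 0.52, y: 0.5), gaze: gaze)) // 1.0
print(renderScale(forTileAt: GazePoint(x: 0.9, y: 0.1), gaze: gaze))  // 0.25
```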
Since the wearer’s eyes are visible in Apple’s promotional videos, it may appear that the Vision Pro uses transparent glass with an overlay projected onto it, similar to the now-defunct Google Glass. This is not the case: the eyes are visible from the outside because of an exterior display, a feature Apple calls EyeSight, that shows a live feed of them.
According to Apple’s specifications, the Vision Pro uses 23 sensing components in total: 12 cameras, five other sensors, and six microphones. Using these sensors, the new R1 chip, two internal screens, and a complicated lens system, it makes the wearer feel as though they are viewing the real environment while actually showing them a live reconstruction of it.
According to Apple, the R1 chip was designed to eliminate lag and the motion sickness that comes with it; the company says it streams new images to the displays within 12 milliseconds. The headset also carries a more conventional M2 chip that handles general-purpose computing and actually runs the apps you use.
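To see why 12 milliseconds is a meaningful budget, here is a back-of-the-envelope check in Swift. The stage breakdown is our own illustration, not Apple’s published numbers; the point is that camera capture, processing, and display scan-out must all fit inside the window before the wearer notices lag.

```swift
import Foundation

// Back-of-the-envelope latency budget for passthrough video, using
// Apple's stated figure that R1 streams new images to the displays
// within 12 ms. The per-stage numbers are our own illustration.
let budgetMs = 12.0
let stages: [(name: String, ms: Double)] = [
    ("camera exposure + readout", 5.0),
    ("sensor fusion + reprojection", 4.0),
    ("display scan-out", 3.0),
]
let total = stages.reduce(0) { $0 + $1.ms }
print("total \(total) ms, budget \(budgetMs) ms, fits: \(total <= budgetMs)")
```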
The headset also has downward-facing external cameras. These track your hands so that you can interact with visionOS through gestures, as in the sketch below. In addition, LiDAR sensors on the outside continuously scan the environment around the Vision Pro.
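For developers, visionOS exposes this hand tracking through ARKit. The sketch below follows the hand-tracking API Apple introduced at WWDC 2023 (ARKitSession and HandTrackingProvider); treat the exact names and signatures as approximate, and note that hand tracking requires the user’s permission at runtime.

```swift
import ARKit

// Sketch of reading hand-tracking data with visionOS's ARKit APIs
// (as introduced at WWDC 2023; treat exact names as approximate).
let session = ARKitSession()
let handTracking = HandTrackingProvider()

func trackHands() async throws {
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        let hand = update.anchor
        // Each anchor reports which hand it is and a full joint skeleton.
        if let indexTip = hand.handSkeleton?.joint(.indexFingerTip) {
            print("\(hand.chirality) index fingertip:",
                  indexTip.anchorFromJointTransform)
        }
    }
}
```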
What Is the Science Behind Vision Pro?
We live in a three-dimensional world and perceive everything in three dimensions, but did you know that each of our eyes actually captures only a two-dimensional image? The depth we perceive is something our brains have learned to compute: they take the two slightly different pictures from each eye and process the differences between them, a mechanism called stereopsis, to create the sensation of depth.
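The geometry behind this trick is the same stereo math used in computer vision: for two eyes (or cameras) separated by a baseline B, a point’s horizontal shift between the two images (its disparity d) encodes its depth as Z = f·B/d, where f is the focal length. A quick Swift sketch, with illustrative numbers rather than Vision Pro specifications:

```swift
import Foundation

// Stereo depth from binocular disparity: Z = f * B / d.
// The numbers below are illustrative, not Vision Pro specifications.
func depth(focalLengthPx f: Double, baselineM b: Double,
           disparityPx d: Double) -> Double {
    precondition(d > 0, "zero disparity means the point is at infinity")
    return f * b / d
}

// Example: ~63 mm eye separation, 1000 px focal length, 21 px disparity.
let z = depth(focalLengthPx: 1000, baselineM: 0.063, disparityPx: 21)
print("perceived depth: \(z) m")  // 3.0 m
```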
The two screens in the Vision Pro presumably exploit this processing by presenting each eye with a slightly different picture, fooling the brain into believing it is viewing a three-dimensional scene. Once the brain is deceived, the user is effectively seeing in 3D.
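In rendering terms, that means drawing the scene twice per frame, from two virtual cameras offset by roughly the distance between human pupils. A minimal sketch using Apple’s simd types (the IPD value and transforms are illustrative, not Vision Pro internals):

```swift
import simd

// To show a 3D scene, a stereo renderer draws it twice from two virtual
// cameras offset by the interpupillary distance (IPD). Illustrative only.
let ipd: Float = 0.063  // ~63 mm, an average adult IPD
let headTransform = matrix_identity_float4x4

func eyeTransform(head: simd_float4x4, eyeOffsetX: Float) -> simd_float4x4 {
    var offset = matrix_identity_float4x4
    offset.columns.3.x = eyeOffsetX  // translate along the head's x axis
    return head * offset
}

let leftEye  = eyeTransform(head: headTransform, eyeOffsetX: -ipd / 2)
let rightEye = eyeTransform(head: headTransform, eyeOffsetX: +ipd / 2)
// Each eye's image is rendered with its own transform; the brain fuses
// the two slightly different pictures into a single 3D percept.
```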