Vision Pro is about to hit the market: what's new about Apple's latest product?

Mondo Digital Updated on 2024-02-01

The launch of Apple's latest product, the Vision Pro, has entered a critical phase. According to official sources, the Vision Pro will be available in all U.S. Apple retail stores and the U.S. Apple online store on February 2, and China is expected to be one of the first overseas markets for the product. Facing fierce competition in artificial intelligence, Apple is under considerable revenue pressure in the new year. So what is new about this strategic product, and can it become a new engine of revenue growth?

Last year, Apple released a new product with great fanfare: the Vision Pro, a head-mounted device whose real novelty lies in how users interact with it. Rather than presenting content on a physical screen, the device uses two tiny, high-resolution displays to project content directly to the user's eyes at very close range. Instead of controlling a computer with a keyboard, mouse, or touchscreen, users control the interface primarily through eye tracking and hand gestures.

Just as the original iPhone did away with the stylus, Apple has removed the physical controls for operating the computer. The computer infers what the user is interested in by tracking their eye movements, then watches their hand gestures to determine what the user wants to do next.
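The division of labor described above, where gaze selects a target and a gesture confirms the action, can be sketched as a toy event loop. This is an illustrative model only, not Apple's actual API; the `UIElement` type, the circular hit test, and the pinch flag are all assumptions made for the sketch.

```python
from dataclasses import dataclass


@dataclass
class UIElement:
    """A hypothetical on-screen control, positioned in the user's field of view."""
    name: str
    x: float       # center of the element (normalized coordinates)
    y: float
    radius: float  # how close the gaze must land to count as "looking at it"


def gaze_target(elements, gaze_x, gaze_y):
    """Return the element the user is currently looking at, if any."""
    for el in elements:
        if (gaze_x - el.x) ** 2 + (gaze_y - el.y) ** 2 <= el.radius ** 2:
            return el
    return None


def handle_input(elements, gaze_x, gaze_y, pinched):
    """Gaze selects a target; a pinch gesture activates the selected target."""
    target = gaze_target(elements, gaze_x, gaze_y)
    if target is None:
        return "no target"
    if pinched:
        return f"activated {target.name}"
    return f"highlighted {target.name}"
```

For example, looking near a "Play" button merely highlights it, and only the pinch gesture triggers it: `handle_input([UIElement("Play", 0.5, 0.5, 0.1)], 0.52, 0.49, True)` returns `"activated Play"`. The point of the split is that gaze alone is too noisy to trigger actions, so intent is confirmed by a deliberate gesture.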

Taken individually, the technologies in the Vision Pro headset are not new, and many similar products existed before the device came out, such as Google's Google Glass, Meta's Quest Pro, and gesture-control technologies like Leap Motion and the Myo armband. However, no one had truly combined these capabilities in a single device before.

Apple aptly named the device a spatial computer: it can display digital imagery against any physical space around the user as a backdrop. Users no longer need to set the device on a desk or lap, and they are freed from constraints on the size of the viewable area. Technically, this means users can feel as if they are in a theater even when they are in a cramped space, such as an airplane seat.

What can we do with a spatial computer? Apple's current list of use cases doesn't look like anything new. Users can operate the device much as they would a regular computer or iPad, viewing two-dimensional content on a more flexible, unconstrained display. There is real demand for this, for instance when the user is in a small space, and the device is also valuable for those who work with large-screen monitors. The closest existing analogue in this respect is the giant-screen TV. Would users be willing to spend $3,500 on such a TV? The answer is yes; Apple itself sells a display for up to $6,000 (the Pro Display XDR). By that benchmark, the price is not outlandish for the existing use cases. This strategy has the further advantage of bringing the large library of apps already on iPad and iPhone to the new platform.

But if the Vision Pro is merely a better, more convenient display for 2D content, investing so much engineering and R&D in it seems like overkill. The real question is whether the device can deliver a more realistic AR and VR experience, which would justify strapping a computer to the user's head. The Vision Pro certainly has the technical ability to project 3D objects into the user's space, and even to present an entirely new space before the user's eyes. Yet Apple barely mentioned the words AR and VR at its launch event. In doing so, it defined a completely new product: the Vision Pro is positioned not as an AR or VR device, nor as a bundle of those technologies, but as a spatial computer. If AR and VR play any role in the Vision Pro, it is through their involvement in the applications that run on this spatial computer.

Let's start with a review of the concepts. Augmented reality (AR) fuses the virtual world with the real one and changes the user's perception of their surroundings. For example, Google Glass could display notifications in front of the user's eyes through smart glasses. The Vision Pro anchors a 2D display at a fixed position in the environment the user sees, so the display stays put when the user's head moves, giving the impression that it genuinely sits in the room. This works because the Vision Pro shows the wearer a real-time video feed of the real world. Users wearing the Vision Pro cannot see their surroundings directly, but the passthrough feels as if nothing stands between them and the room. So, technically, Apple is re-rendering the user's view of the surrounding environment rather than overlaying content on a physical scene the user sees directly. To the user, there is no perceptible difference.

Virtual reality (VR) immerses users in a fully virtual environment. The Vision Pro tracks changes in the user's gaze, so in theory the user can feel fully present in a virtual space. The device offers multiple viewing modes: in one, the content before the user's eyes creates the illusion of standing in a real space; switching modes changes that view, so the user feels transported to an entirely new place, with the physical surroundings replaced by a digitally created 3D environment. From this standpoint, the Vision Pro is clearly also a VR device.

It's worth noting that while the Vision Pro has both AR and VR capabilities, Apple hasn't said much about use cases for either technology. In other words, Apple built a device capable of both but has yet to find a classic, representative application in either area, which is one reason it chose to unveil the Vision Pro at its annual developer conference. Apple needs software, and it needs the imagination of developers and the public to inspire what gets built.

In a recent article, we outlined the ways AR and VR applications can create unique value for users. Beyond gaming and entertainment, we pay particular attention to business applications, especially those that improve productivity. In that spirit, we ask: which AR and VR applications can help users make better decisions and thereby create real value? For anyone looking to develop apps for Apple's platform, understanding what the technology makes possible is crucial.

When making decisions, decision-makers usually face some degree of uncertainty. Information is the best remedy: with sufficient information, decision-makers can judge a situation more comprehensively and avoid errors as far as possible. But two things matter when using information for decisions. First, decision-makers need access to accurate information. Second, they need enough cognitive bandwidth to parse and distill that information into something they can act on.

AR and VR have proven useful on both fronts. VR can supply users with relevant information, especially when that information is otherwise unavailable or costly to obtain. VR devices immerse users in new environments, giving them direct access to first-hand information. For example, a VR device can show users a realistic view of the inside of a burning building, or provide a flight-simulator-style training environment that lets trainees learn effectively without taking real risks.

In contrast, AR analyzes the user's current environment and then generates relevant information from it. For example, when a user meets someone at a conference, an AR device can surface information about who they are. Or if the user encounters a fire, the AR device can overlay an emergency route map on the actual scene to help the user find an escape route. In each case, the goal of the AR device is to extract signals from the user's environment and present useful information before their eyes. It's important to note, however, that the Vision Pro is not a portable laptop-style device: it cannot be used outside the home or workplace, so outdoor scenarios such as navigation while driving are off the table.

We can also see why many earlier AR and VR use cases delivered little value. For people attending virtual meetings via Zoom, beautifully decorated virtual rooms and avatars don't provide meaningfully valuable information. The same goes for AR glasses: pushing text notifications as the user moves around doesn't reduce cognitive load, it adds to it. Our framework suggests the best use cases for VR and AR arise only in specific contexts. Where information is extremely costly or dangerous to acquire, VR shines; in extremely complex environments, AR simplifies the scene through digital overlays. Some scenarios call for both, such as developing and flight-testing new aircraft, architectural prototyping, or remote assistance for medical procedures. The Vision Pro has demonstrated a remarkable ability to support such tasks, but Apple has left the work of discovering and designing the relevant use cases to others. For developers hoping to profit on Apple's platform, the best bet is applications where information is expensive to obtain and accuracy requirements are high.

This is to be expected whenever Apple first launches a device. The iPod began as just a digital Walkman, the iPhone as an iPod that made calls, the iPad as an enlarged iPhone, and the Apple Watch as just a more capable smartwatch; likewise, the Vision Pro is, for now, just a 3D screen freed from the constraints of physical space. In each earlier case, Apple let developers innovate, pushing the device beyond its initial use case and giving it capabilities absent from the original design. The Vision Pro is another new and promising attempt along that broad road of technology development.

By Joshua Gans and Abhishek Nagaraj

Joshua Gans holds the Jeffrey S. Skoll Chair in Technical Innovation and Entrepreneurship at the Rotman School of Management, University of Toronto, is Chief Economist at the Creative Destruction Lab, and is co-author of Power and Prediction: The Disruptive Economics of Artificial Intelligence. Abhishek Nagaraj is an Assistant Professor at the Haas School of Business, UC Berkeley.

Edited by Chang Minxiao
