The "reassuring" privacy technology in mixed reality headsets actually has a fatal flaw

Mondo Technology · Updated on 2024-02-06

Recently, Apple's Vision Pro, which launched first in the United States, has undoubtedly become the biggest highlight of the entire consumer electronics industry.

On the one hand, some users who have already bought this MR device have begun to share and discuss their experience online, actively exploring what the Vision Pro can do and praising a display so sharp that, as they put it, "you can't see the pixels at all."

On the other hand, related reports show that some Vision Pro users have begun wearing the headset out, appearing in all kinds of public places.

Interestingly, if you still remember the public hostility toward Google Glass when it first appeared years ago, the current "treatment" of Apple's Vision Pro may well make you smile.

After all, judging from current reports, the public has shown far greater acceptance of the much larger and more conspicuous Vision Pro. There have been no reports of users being thrown out of venues, let alone of anyone being attacked or robbed for wearing one in public.

Some reports attribute this to the iris authentication feature Apple built into the Vision Pro. But the real key point is clearly that people have grown used to being surrounded by cameras and sensors, and no longer "care" much about them.

However, even if consumers do not care, manufacturers cannot afford to wait for problems to happen. For example, recent reports suggest that Apple may be considering acquiring a German company called "Brighter AI" in order to use its proprietary "Deep Natural Anonymization 2.0" technology to further protect public privacy around its MR devices.

Specifically, most current MR and AR devices come with AI-based privacy protection features. When the AI recognizes private content in the camera feed, such as a passer-by's face or a license plate, it automatically pixelates or blurs that content to stay on the safe side. Obviously, such a feature ends up giving the user a very unnatural view of the world.
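For reference, the conventional "mosaic" approach described above can be sketched in a few lines of Python with OpenCV. This is a generic illustration, not the code of any actual headset; the bundled Haar cascade model and the block size are arbitrary choices made for the sketch.

```python
import cv2

# OpenCV ships a basic frontal-face detector; a real device would use a far
# stronger detector, but this is enough to illustrate the pipeline.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def pixelate_region(frame, x, y, w, h, blocks=8):
    """Classic mosaic: downscale a region to a few blocks, then upscale it."""
    roi = frame[y:y + h, x:x + w]
    small = cv2.resize(roi, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
    frame[y:y + h, x:x + w] = cv2.resize(
        small, (w, h), interpolation=cv2.INTER_NEAREST
    )
    return frame

def anonymize_frame(frame):
    """Detect faces in a camera frame and pixelate each one."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        frame = pixelate_region(frame, x, y, w, h)
    return frame
```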

And "Deep Natural Anonymity 2The advantage of "0" technology is that when it detects privacy-sensitive content in the lens, it will use AI algorithms to "rewrite" these contents. For example, replacing a license plate number with a meaningless string of characters in the same font, or replacing a stranger's facial features with a "popular face".

In this way, the footage these devices capture is not riddled with ugly mosaics or Gaussian blur, yet the private information in it has been quietly scrubbed away.
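To make the contrast concrete, here is a toy illustration of the "rewrite instead of blur" idea. This is emphatically not Brighter AI's actual algorithm, which relies on generative models; the sketch simply pastes a hypothetical pre-generated synthetic face (synthetic_face.png, a placeholder asset) into the detected region and blends it with the original so lighting is roughly preserved.

```python
import cv2

# Placeholder asset: a hypothetical, pre-generated synthetic face image.
synthetic_face = cv2.imread("synthetic_face.png")

def replace_face(frame, x, y, w, h, alpha=0.85):
    """Swap a detected face region for a synthetic one instead of blurring it."""
    substitute = cv2.resize(synthetic_face, (w, h))
    roi = frame[y:y + h, x:x + w]
    # Blend mostly the substitute with a little of the original region, so the
    # overall brightness and tone of the scene stay plausible.
    frame[y:y + h, x:x + w] = cv2.addWeighted(substitute, alpha, roi, 1 - alpha, 0)
    return frame
```

In the detection loop from the previous sketch, replace_face could be dropped in where pixelate_region was used; the recorded footage then shows a plausible (but fictitious) face rather than a mosaic.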

At first glance, this seems like a very clever design, one that should help ease ordinary consumers' wariness of AR and MR devices in public. But looking at it from another angle, does such an "advanced feature" have shortcomings of its own? Of course it does.

First of all, if you have been following the Vision Pro, you will know that with this kind of fully enclosed MR device, the user's eyes cannot see the outside world directly. What they see is actually footage captured by cameras and then reproduced on the headset's internal displays.

This raises the question of where in the pipeline the AI privacy protection algorithm should be applied. If it recognizes and processes the real-world images from the cameras in real time, then the "world" the user sees is the picture after private information has been removed. At best, this means you might look straight at an acquaintance and not recognize them, or fail to recognize your own license plate.
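One obvious way out, at least conceptually, is to anonymize only the frames that are stored or shared, never the live passthrough feed. The sketch below assumes hypothetical display and recorder objects and reuses the anonymize_frame function from earlier; none of these names correspond to a real visionOS or headset API.

```python
def handle_frame(raw_frame, display, recorder, anonymize_frame, recording=False):
    """Split the pipeline: raw frames go to the wearer, scrubbed frames to disk."""
    # The wearer's passthrough view shows reality unaltered.
    display.show(raw_frame)
    # Only data that leaves the headset (recordings, screenshots, shared
    # streams) has private information removed before it is persisted.
    if recording:
        recorder.write(anonymize_frame(raw_frame.copy()))
```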

More seriously, imagine you are out wearing an MR headset or AR glasses and suddenly witness a crime, or the device itself is snatched from you. At that moment, obviously, no one wants "privacy protection" to be switched on, leaving the footage without the key faces or license plate numbers.

Seen from this angle, such "AI privacy protection algorithms" will likely need to come with a "panic switch" on future MR and AR devices. But once such a switch exists, could it be abused and create new problems of its own?
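Purely as a thought experiment, such a "panic switch" might take the form of a time-limited bypass that also writes an audit record, so that turning privacy protection off at least leaves a trace. Everything below is speculative; the class, the window length, and the logging are invented for illustration.

```python
import time

class PanicSwitch:
    """Hypothetical emergency bypass for on-device anonymization."""

    def __init__(self, window_seconds=60):
        self.window_seconds = window_seconds  # how long raw capture is allowed
        self.activated_at = None
        self.audit_log = []                   # every use of the bypass is recorded

    def trigger(self):
        """User deliberately disables anonymization for a short window."""
        self.activated_at = time.time()
        self.audit_log.append(("bypass_on", self.activated_at))

    def bypass_active(self):
        """Check whether raw (non-anonymized) capture is currently permitted."""
        if self.activated_at is None:
            return False
        if time.time() - self.activated_at > self.window_seconds:
            self.audit_log.append(("bypass_expired", time.time()))
            self.activated_at = None
            return False
        return True
```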
