Since Facebook changed its name to Meta, there has been a lot of buzz about the metaverse and a resurgence of interest in virtual worlds. Although the concept of virtual reality (VR) has been around for a long time, the technology is only now beginning to see real application.
First, the metaverse: the concept dates back at least to 1984, when William Gibson wrote Neuromancer, and it has been a regular feature of science fiction ever since.
VR has been a part of pop culture for decades, appearing in films like The Matrix, CyberWorld Wars, and Ready Player One. Since the launch (and occasional failure) of consumer devices like the Oculus Rift, PlayStation VR, Valve Index, HoloLens, and even Google Glass, the technology has steadily moved out of sci-fi and into commercial reality. But there is still a long way to go.
This article discusses virtual reality and augmented reality as a spectrum of three main categories: fully immersive virtual reality at one end, augmented reality (AR) at the other, and in between, the various approaches called mixed reality (MR).
Together, these fall under a broader umbrella called extended reality (XR), which includes the three categories above as well as supporting technologies such as haptics and spatial audio.
In the future, XR may also include brain-computer interfaces, smell and temperature feedback, and perhaps even taste. These concepts have not yet materialized, mainly because a great deal of R&D work remains to be done on the device side. It is not yet clear what the data for such sensory interfaces would even look like, but we already have devices and data for AR, VR, haptics, and spatial audio, so those are moving forward.
The question we often get asked is, "Why hasn't extended reality taken off yet? Why isn't XR everywhere?" To answer these questions, and to discuss how we can work toward a future that includes metaverse experiences, we need to look at some of the limitations that exist today.
With AR, the glasses are bulky, awkward, and come in essentially one style. Remember Google Glass or Snapchat Spectacles? If you like that style, great. Otherwise, no matter how cool the technology is, you probably won't wear them. People need a variety of styles to choose from, so for true adoption the technology needs to be compatible with a wide range of form factors.
As for VR headsets, the simple fact is that most people don't want to wear one for long periods of time. They are heavy and can heat up, leaving you hot, sweaty, and uncomfortable.
They are fine for short sessions, such as simulating a jump from an airplane or freediving with great white sharks, but they are not the kind of device most people will use to watch a feature film or play video games for three hours. AR and mixed reality devices can be even clunkier; you will rarely see anyone wearing a HoloLens in public. That could change as devices become smaller and more comfortable.
The Mixed Reality of the Future

Mixed reality devices also need more capabilities and a wider field of view to provide more advanced pass-through displays for AR applications. Achieving this requires more and better cameras, infrared (IR) cameras, or other sensors to build accurate spatial maps that improve the overall quality of experience. Device manufacturers are aware of these challenges and are already working on solutions.
Regardless of the device being used, what does the convergence of the virtual and the real world actually look like? Is it AR, overlaying different imagery onto a real-world environment, making a modern city look medieval or changing people's clothes? Or are we talking about a truly virtual representation of the real world, like a digital twin of your city?
Or something even more fantastical: a fully immersive virtual environment that simply doesn't exist in the real world. Whichever we're talking about, a great deal of computation is required, and the devices themselves are too small to house all the processing power needed to present these experiences.
For glasses and headsets to become smaller, lighter, and more portable while still handling the required workloads, mobile networks must improve. To make devices smaller, extend battery life, and reduce heat, we need to offload processing to the edge of the network. This has to be done while keeping latency at or below the 20 ms threshold, because beyond roughly 20 ms of latency in VR people start to feel sick. Some advanced AR applications, where devices track and identify fast-moving objects, will require even lower latency, down to the 5 ms range.
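To make that constraint concrete, here is a minimal sketch in Python of how an edge-rendering latency budget might be totaled against the 20 ms threshold. The stage names and all the millisecond figures are illustrative assumptions, not measurements from any real deployment:

```python
# Illustrative motion-to-photon latency budget for edge-rendered VR.
# Every number below is an assumption for the sake of the example.

BUDGET_MS = 20.0  # above roughly 20 ms, many users begin to feel sick

stages_ms = {
    "sensor sampling (IMU/cameras)": 1.0,
    "uplink: pose to edge (radio + transport)": 4.0,
    "edge rendering (GPU frame time)": 7.0,
    "encode + downlink: frame to headset": 5.0,
    "decode + display scan-out": 2.0,
}

total = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:<45} {ms:>5.1f} ms")
print(f"{'total motion-to-photon':<45} {total:>5.1f} ms "
      f"({'within' if total <= BUDGET_MS else 'over'} the {BUDGET_MS:.0f} ms budget)")
```

Even with optimistic numbers like these, the network legs consume nearly half the budget, which is why the transport requirements discussed next matter so much.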
Over time, we'll see less and less computation done on the headset itself. To keep up, our 5G (and eventually 6G) networks will need to handle the required throughput, edge computing, and latency; we need transport networks with low latency, low jitter, high bandwidth, and ultra-reliability with essentially no packet loss. We're getting there, but today's networks can't do it yet.
We need more robust networks, not only because shrinking devices drives the need for edge computing, but also because virtual worlds require a great deal of graphics processing and rendering. That rendering needs to happen at the edge, with the rendered world returned to the device and its wearer in near real time.
Moving graphics processing and rendering to the edge opens the door for devices to become smaller and lighter, and it also lays the groundwork for new innovations in complex rendering performed remotely and streamed back to the device. Remotely rendering games and other relatively linear virtual worlds is one thing; rendering live experiences in real time is another matter entirely.
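As a rough illustration of that round trip, here is a minimal per-frame loop an edge-rendered headset might run. This is a Python sketch with hypothetical stub functions (read_head_pose, render_on_edge, display); the article names no specific API, so these stand in for whatever streaming protocol a real system would use:

```python
import time

# Hypothetical per-frame loop for an edge-rendered headset.
# The three functions below are illustrative stubs, not a real API.

FRAME_BUDGET_S = 1 / 72  # a common VR refresh rate is 72 Hz

def read_head_pose():
    """Stub: sample the headset's IMU and tracking cameras."""
    return {"position": (0.0, 1.7, 0.0), "orientation": (0.0, 0.0, 0.0, 1.0)}

def render_on_edge(pose):
    """Stub: uplink the pose; the edge GPU renders and encodes a frame."""
    return b"<encoded frame bytes>"

def display(frame):
    """Stub: decode the frame and scan it out to the panels."""
    pass

for _ in range(3):  # loop a few frames for demonstration
    t0 = time.monotonic()
    pose = read_head_pose()       # 1. sample motion on-device
    frame = render_on_edge(pose)  # 2. render remotely, stream the frame back
    display(frame)                # 3. decode and present on-device
    # Sleep off whatever remains of the frame budget.
    time.sleep(max(0.0, FRAME_BUDGET_S - (time.monotonic() - t0)))
```

The whole loop (steps 1 through 3, plus both network hops) has to fit inside the latency budget sketched earlier, every single frame.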
Some devices have already tried different models of offloading compute: the Valve Index, for example, is a VR headset that connects to a high-performance PC over a wired connection and is used primarily for gaming.
Similarly, a company called Nreal offers AR glasses that use a wired connection to tap the processing power of a smartphone. While both examples rely on wires, they point us toward applications, devices, and virtual worlds that can be accessed, processed, and rendered over wireless networks.
There is also a technology called sidelink, currently being standardized in 3GPP, that allows certain cellular devices to communicate with each other without going through the core network. This could prove useful for VR and AR rendering, since short-range wireless technologies such as Bluetooth are too slow to handle the high bandwidth these applications demand. Innovations like these raise the possibility that devices such as glasses could one day replace mobile phones.
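A quick back-of-envelope comparison shows why Bluetooth falls short. In this Python sketch, the Bluetooth figure is the commonly cited Bluetooth 5 LE maximum, while the sidelink throughput and the XR stream bitrate are our own assumptions for illustration:

```python
# Back-of-envelope: why Bluetooth can't carry a high-quality XR stream.
# All figures are approximate and for illustration only.

BLUETOOTH_5_MBPS = 2       # commonly cited Bluetooth 5 LE maximum PHY rate
SIDELINK_MBPS = 100        # assumption: what a device-to-device cellular
                           # sidelink might plausibly sustain
XR_STREAM_MBPS = 80        # assumption: a compressed high-resolution,
                           # high-refresh-rate stereo XR stream

for name, rate in [("Bluetooth 5", BLUETOOTH_5_MBPS),
                   ("hypothetical sidelink", SIDELINK_MBPS)]:
    verdict = "enough" if rate >= XR_STREAM_MBPS else "far too slow"
    print(f"{name}: {rate} Mbps vs. ~{XR_STREAM_MBPS} Mbps needed -> {verdict}")
```

Under these assumptions the gap is not marginal but more than an order of magnitude, which is why a faster direct link between devices is interesting.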
Will Meta "own" the metaverse? They will have a virtual world, and they can call it a metaverse, but they won't own the entire metaverse, any more than any one company owns the internet today. The metaverse will be a collection of virtual worlds we can access, much like the internet, with countless sites available for every conceivable purpose. Some parts of the metaverse may be digital twins of the real world, some may merge the real world with the virtual, and others may remain completely virtual.
The metaverse will eventually become decentralized and device-independent. And, just like the internet, it will require a set of standards, protocols, and common APIs to ensure it works properly and is highly interoperable. Once this happens, users will be able to access Meta's metaverse over a 5G (or 6G) network using a smart device such as a phone, just as easily as they could access Google's virtual world via an AT&T network using a Sony device.
If devices and worlds remain largely proprietary, as they are today, growth potential will be limited. Interoperability standards for the metaverse will be essential, just as MPEG is for video compression and 3GPP for cellular communications. In the virtual world, you will have access to different areas regardless of which provider you connect through, and each business will have its own brand-specific experience in the virtual world, just as it does in the real world.
To provide the highest quality experience to the largest number of users, device and network interoperability is critical and must be standardized. Once such standards are developed, no single company will own them, just as no company owns 3GPP or MPEG.
So, once we get there, how will extended reality be used? We expect gaming to remain an important driver, just as it is today, but there are many other ways we can see this technology taking shape.
Imagine a virtual sports bar where any number of matches could be streamed to a VR device, and you change the channel simply by turning your head to look in a different direction. Or, at a race, you could switch the perspective of the immersive experience from the driver's seat to the pits or the stands. What if you could simulate diving with sharks, skydiving, or visiting a world-class museum? The possibilities of the metaverse seem endless.
We may be 15 to 20 years or more away from a truly standardized, open metaverse. In the meantime, we'll see countless companies experimenting with their own metaverses, like the big-M Metaverse proposed by Meta. But will Meta own it all in the end? Of course not. Meta may have a "branded" metaverse, but there will be many metaverses to explore and enjoy.
Built on 3DCAT's stable, efficient, low-latency cloud GPU real-time rendering capabilities, CloudXR can turn almost any terminal device (including head-mounted HMDs and networked Windows and Android devices) into a high-definition XR display capable of showing professional-quality images. This article, "Is the Metaverse the Future of VR Virtual Reality?", is provided by 3DCAT, a real-time rendering cloud solution provider. If reprinting, please credit the source and include a link.