Hardware and the Metaverse

Consumer Hardware

Every year, consumer hardware benefits from better and more capable sensors, longer battery life, more sophisticated and diverse haptics, richer screens, sharper cameras, and so on. There is also a growing number of smart devices, such as watches and VR headsets (and, soon, AR glasses). These advances deepen and extend the user’s immersion, even though it is the software that delivers the actual experience.

As an example, take live avatar apps such as Bitmoji, Animoji and Snapchat AR. These applications require sophisticated software and capable CPUs/GPUs (see Section #3), but they also depend on, and are enhanced by, powerful face-tracking cameras and sensor hardware that is constantly improving. The infrared sensors on newer iPhone models can now detect 30,000 points on your face. This is most commonly used to authenticate your face, but it can also be connected to Epic Games’ Live Link Face app, which allows anyone to create and stream a high-fidelity, Unreal Engine-based avatar. Epic will use this functionality to live-map a Fortnite player onto their in-game character.
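Live Link Face builds on the face-capture data that ARKit exposes to any iOS app. As a rough sketch (not Epic’s implementation), the Swift snippet below reads that TrueDepth-derived data through ARKit’s standard face-tracking API; the printed output is purely illustrative, standing in for wherever an app would forward the data.

```swift
import ARKit

// Minimal sketch: reading TrueDepth face-tracking data via ARKit.
// ARFaceTrackingConfiguration and ARFaceAnchor are standard ARKit APIs;
// what is done with the blend shapes here (printing) is purely illustrative.
final class FaceCapture: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking requires a TrueDepth camera (the infrared dot projector).
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            // Dozens of blend-shape coefficients (jaw, brows, eyes, ...), each 0.0-1.0,
            // derived from the TrueDepth sensor's infrared depth data.
            let jawOpen = face.blendShapes[.jawOpen]?.floatValue ?? 0
            let smile   = face.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
            print("jawOpen: \(jawOpen), smileLeft: \(smile)")
            // A tool like Live Link Face forwards these coefficients (plus head pose)
            // over the network to drive an avatar rig inside Unreal Engine in real time.
        }
    }
}

// In a real iOS app this object would be owned by a view controller or SwiftUI view.
let capture = FaceCapture()
capture.start()
```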


Apple’s Object Capture allows users to quickly create high-fidelity virtual objects from photographs taken with their standard iPhone. These objects can be used to build virtual environments and to lower the cost of producing virtual goods; they can also be overlaid onto real environments for art, design and other AR experiences.
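Object Capture is exposed to developers through RealityKit’s PhotogrammetrySession API (macOS 12 and later). The sketch below shows the basic flow, a folder of overlapping photos in and a USDZ model out; the file paths are placeholders.

```swift
import Foundation
import RealityKit

// Minimal Object Capture sketch using RealityKit's PhotogrammetrySession.
// Input: a folder of overlapping photos of an object; output: a USDZ model.
// Both paths below are placeholders.
let photosFolder = URL(fileURLWithPath: "/tmp/capture-photos", isDirectory: true)
let outputModel  = URL(fileURLWithPath: "/tmp/captured-object.usdz")

do {
    let session = try PhotogrammetrySession(input: photosFolder)

    // Listen for progress and completion messages from the reconstruction pipeline.
    let watcher = Task {
        for try await output in session.outputs {
            switch output {
            case .requestProgress(_, let fraction):
                print("Progress: \(Int(fraction * 100))%")
            case .requestComplete(_, .modelFile(let url)):
                print("Model written to \(url.path)")
            case .processingComplete:
                print("Reconstruction finished.")
            default:
                break
            }
        }
    }

    // Ask for a medium-detail model; higher detail levels yield larger assets.
    try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])
    RunLoop.main.run()   // keep the process alive while reconstruction runs
    _ = watcher
} catch {
    print("Object Capture failed: \(error)")
}
```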


New ultra-wideband chips in many recent smartphones, including the iPhone 11 and 12, emit 500,000,000 radar pulses per second, paired with receivers that process the return information. Smartphones can therefore build detailed radar maps of everything from your home to your office to the street you are walking down, and know exactly where you are within them: your front door could unlock as you approach from the outside yet stay locked while you are inside, and you could navigate your entire home via a live radar map.
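The radar-mapping scenarios above are largely forward-looking. What developers can already use is UWB ranging through Apple’s Nearby Interaction framework, which reports precise distance and direction between two U1-equipped devices. A minimal sketch, with the app-specific exchange of peer tokens omitted:

```swift
import NearbyInteraction

// Minimal sketch of UWB ranging via Apple's Nearby Interaction framework.
// It measures precise distance (and, when available, direction) between two
// U1-equipped devices. Exchanging discovery tokens between the devices
// (e.g. over MultipeerConnectivity) is app-specific and omitted here.
final class UWBRanging: NSObject, NISessionDelegate {
    let session = NISession()

    override init() {
        super.init()
        session.delegate = self
    }

    // Call this once the peer's discovery token has been received out of band.
    func startRanging(with peerToken: NIDiscoveryToken) {
        let config = NINearbyPeerConfiguration(peerToken: peerToken)
        session.run(config)
    }

    func session(_ session: NISession, didUpdate nearbyObjects: [NINearbyObject]) {
        for object in nearbyObjects {
            if let distance = object.distance {
                print("Peer is \(distance) meters away")   // centimeter-level precision
            }
            if let direction = object.direction {
                print("Direction toward peer: \(direction)")  // unit vector, when resolvable
            }
        }
    }
}
```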

It is amazing that all of this can be achieved with standard consumer-grade hardware. And this functionality is becoming ever more central to everyday life, which helps explain why the iPhone’s average selling price has risen from approximately $450 in 2007 to over $750 in 2021.

XR headsets are another great example of hardware innovation. The first consumer Oculus (2016) had a resolution of 1080×1200 per eye, while the Oculus Quest 2, released four years later, offers 1832×1920 per eye (roughly 4K across both displays combined). Palmer Luckey, Oculus’s founder, believes VR needs at least double that resolution again to overcome pixelation. The first consumer Oculus had a peak refresh rate of 72Hz; the most recent edition achieves 90Hz, and up to 120Hz when connected to a gaming PC via Oculus Link. Many believe 120Hz is the minimum threshold to avoid nausea and disorientation in some users, and ideally this should be achieved without a tether to a gaming-grade computer.
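To put the “roughly 4K” comparison in perspective, the back-of-the-envelope pixel arithmetic (both Quest 2 displays combined versus a single 4K UHD panel) is:

$$
2 \times (1832 \times 1920) = 7{,}034{,}880 \ \text{pixels}
\qquad \text{vs.} \qquad
3840 \times 2160 = 8{,}294{,}400 \ \text{pixels (4K UHD)}
$$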

Microsoft’s HoloLens 2 display covers a 52° field of view, up from the 34° of its predecessor, while Snap’s new glasses cover only 26.3°. Much wider coverage will likely be needed for AR to succeed. These are hardware problems, not software problems, and they must be solved while also improving the other hardware inside the wearable (e.g. speakers, processors and batteries) and, ideally, shrinking it.


Google’s Project Starline is another example: a hardware-based booth designed to make video calls feel as though you are sitting in the same room as the other participant. The booth uses a dozen depth sensors and cameras, a fabric-based, multi-dimensional light-field display, and spatial audio speakers. Volumetric data processing and compression bring this to life, with the result delivered via WebRTC, but it is the hardware that captures and presents a level of detail that “seems real”.


Non-Consumer Hardware

Given what’s possible with consumer-grade devices, it should be no surprise that industrial and enterprise hardware, at multiples of the price and size, can do stunning things. Leica sells $20,000 photogrammetry cameras with up to 360,000 “laser scanning set points per second”, which can capture entire malls, buildings and homes with more detail and clarity than anyone would ever see in person. Epic Games’ Quixel, meanwhile, uses proprietary cameras to create environmental “MegaScans” composed of millions of tiny triangles.

These devices enable companies to create high-quality “mirror worlds” or “digital twins” of physical spaces, and they also make it easier and cheaper to use real-world scans to build higher-quality fantasy worlds. Fifteen years ago, Google’s ability to capture 360° 2D images of nearly any street on the planet was astonishing; today, numerous businesses can purchase LIDAR scanners and cameras to create 3D photogrammetric reproductions of any location on Earth.


These cameras become particularly fascinating when they go beyond static capture and virtualization and instead track and update a rendering of the real world in real time. For example, the cameras in an Amazon Go store track many shoppers simultaneously in software. In the future, this sort of tracking system could be used to replicate those shoppers, in real time, inside a virtual mirror world. Technologies such as Google’s Starline could then allow remote workers to be “present” in those stores (or museums, DMVs, or casinos) from an offshore ‘Metaverse contact center’, or from home in front of their iPhones.

When you visit Disneyland, you might be able to see virtual representations (or robots) of your friends back home and team up with them to defeat Ultron and collect Infinity Stones. These experiences are about more than hardware, but they are constrained, enabled and realized by it.

Source: https://www.matthewball.vc/all/hardwaremetaverse