
March 30, 2022

The Evolving Ecosystem of Immersive Experiences

Many of the online experiences we have today were unimaginable just a generation ago. Who could have foreseen, for example, that people could make money by having others watch them play video games on a platform called Twitch? Who could have envisioned that TikTok would play a role in influencing geopolitics?

While many of these experiences remain in the domain of flat-screen displays delivered by major social media platforms, we are heading toward a more immersive online world, one in which 3D technologies will usher in profound changes to the experiences we have every day.

What Is an Immersive Experience?

Before we talk about the technology powering these changes, it’s important to ask: what makes an experience immersive? The word “immersive” gets thrown around a lot, but it’s a challenging concept to quantify. The simplest working test is subjective: if an experience holds your attention so completely that you feel immersed, it’s safe to call it immersive.

When you ask most people about times when they have felt truly immersed, they will often talk about being in nature (hiking in the woods, fishing by a river, stargazing), at social events (parties, concerts, sporting events), being creative (making music, dancing, painting) or being athletic (working out, running, playing tennis).

These types of experiences – and many others – are immersive because they focus our minds completely on the activity and the environment. They are often forms of escape, rejuvenation, and recreation. They help us feel more alive, more human.

Over the past couple of decades, technologies such as gaming, augmented reality (AR) and virtual reality (VR) have been developed to help us have more authentically immersive experiences online. As these technologies become more authentic and immersive, they will allow us to interact with others in more meaningful ways that feel “real”.

How to Enable Immersivity

My colleague Chris Phillips recently wrote about some of the devices that will power the next generation of VR and the much-hyped Metaverse. Today, I want to talk about what has to happen on the network side for these experiences to come to fruition, because we are going to need much more from our networks than we have ever needed before.

Historically, we required our networks to do two things: give us more bandwidth, and give us access to the Cloud (an abstract entity that exists outside the service provider’s network). Providers needed to deliver that access at latencies anywhere between 200 and 500 milliseconds.

If you’re streaming Netflix, for instance, you probably won’t notice if the latency is 500 milliseconds, because you don’t expect that experience to be immersive. The experience is essentially a one-way pipe of bandwidth opened so you can watch the latest episode of your favorite show. What is changing today are the requirements that future applications will place on the network. Instead of the relatively simple demands for bandwidth and Cloud access, the model gets more complex and begins to resemble an ecosystem with three main components:

  1. High Bandwidth: Immersive devices will consume a lot more data. A single VR session serving up 2K resolution per eye, at 90 frames per second with a 90-degree Field of View (FoV), will consume somewhere between 50 and 100 megabits per second (see the back-of-the-envelope sketch after this list). As resolution, frame rate and FoV increase, data rates will increase, and when we get to human-eye-resolution VR, we’ll need a lot more than 100 Mbps for those experiences to approximate the real world.
  2. Low Latency: An immersive 3D experience is one you actively interact with. Even if you’re not controlling a character or changing the environment, at a minimum you will be changing your viewpoint within it. Every time you move your head or change where you are looking, the compute system needs to register your new position in the 3D space, render the new view and serve it up to you. Humans feel nausea in VR experiences if this latency exceeds 20 milliseconds, and some feel it at even lower levels. If the Cloud is 60 or 100 milliseconds away, this is a huge problem.
  3. A High Level of Network-Based Compute: Traditional multimedia systems are based on computers that perform simple encoding and decoding. New 3D systems have considerably more functionality, such as awareness of context, computer vision and depth perception, object identification, speech and text recognition, and viewpoint-based rendering. All of these are compute-intensive on their own, and they all rely on machine learning, which requires even more powerful computing resources. This level of computing horsepower, with powerful GPUs and heavy matrix multiplication, is usually not practical to put into the end-user device.
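
To make items 1 and 2 concrete, here is a minimal back-of-the-envelope sketch in Python. The compression ratio is my assumption (the figures above give only the resulting 50-100 Mbps range, not a specific encoder), and the latency arithmetic simply restates the budget problem from item 2.

```python
# Rough estimates for the VR figures above. The 150:1 compression ratio
# is an assumption; modern video encoders commonly achieve on the order
# of 100-200:1, which is what lands the result in the 50-100 Mbps range.

def vr_bitrate_mbps(width, height, fps, eyes=2, bits_per_pixel=24,
                    compression_ratio=150):
    """Estimated compressed bit rate of a stereo VR stream, in Mbps."""
    raw_bps = width * height * bits_per_pixel * fps * eyes
    return raw_bps / compression_ratio / 1e6

# 2K per eye (~2048 x 1080) at 90 fps, per the example in item 1.
print(f"~{vr_bitrate_mbps(2048, 1080, 90):.0f} Mbps")  # ~64 Mbps at 150:1

# Motion-to-photon budget from item 2: with a ~20 ms nausea threshold,
# a Cloud that is 60-100 ms away overruns the budget several times over
# before a single frame is rendered.
BUDGET_MS = 20
for cloud_rtt_ms in (60, 100):
    print(f"Cloud at {cloud_rtt_ms} ms RTT -> "
          f"{BUDGET_MS - cloud_rtt_ms} ms left in the budget")
```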

How Do We Build This?

First, on the compute side, it’s about cloud-based edge computing platforms that can serve the needs of immersive experiences. The cloud gaming platforms we have today (such as Amazon Luna, Google Stadia and Microsoft’s xCloud) are examples of the kinds of platforms that can be scaled to support immersive 3D experiences.

These platforms are built on a host of machine-learning-based application frameworks that GPU manufacturers are developing, capable of delivering blazingly fast, high-performance compute. These edge computing platforms also require new cloud orchestration technology, which has historically been the domain of web-scalers like Amazon, Microsoft and Google, and of cloud technology providers like IBM and VMware.

Second, on the bandwidth component of the ecosystem, service providers must work in concert with the compute side to deliver high bandwidth, traditionally the domain of cable and fiber providers. Increasingly, cellular will play a role here as well, especially for immersive mobile experiences: 5G is already capable of delivering very high bandwidth.

Cable providers have also begun to deliver technologies that increase the capabilities of their service. DOCSIS 3.1 modems, for instance, can deliver 1 Gbps download speeds, an experience that approximates fiber to the home. DOCSIS 4.0 modems will offer symmetric multi-gigabit speeds both downstream and upstream.
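
As a quick sanity check (my arithmetic, with an assumed 4 Gbps figure standing in for a multi-gigabit DOCSIS 4.0 tier), these speeds can be read against the VR bit rates discussed earlier:

```python
# How many concurrent ~100 Mbps VR sessions fit in a household pipe?
VR_SESSION_MBPS = 100  # upper end of the range discussed earlier

for name, downstream_mbps in [("DOCSIS 3.1 (1 Gbps)", 1000),
                              ("DOCSIS 4.0 (assumed 4 Gbps tier)", 4000)]:
    print(f"{name}: ~{downstream_mbps // VR_SESSION_MBPS} sessions")
```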

Finally, the low-latency component is where we can expect to see a lot happening. Content Delivery Networks (CDNs) like Cloudflare, Akamai and Fastly, as well as cloud/edge application service providers like those mentioned above (Microsoft, Nvidia and others), are beginning to densify their internet points of presence (PoPs) across the US and other parts of the world, depending on where they serve the most customers.

This densification means that they are also deploying edge compute in those data centers, which gets the content and service physically closer to more customers and lowers the latency they experience. Cloud providers and internet service providers are forming partnerships that will help improve the delivery of this low-latency edge compute to their customers.
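
To illustrate why this matters, here is a small Python sketch of latency-aware endpoint selection. The hostnames are hypothetical placeholders, and real CDNs steer clients via DNS or anycast rather than client-side probing; the point is simply that a physically closer PoP wins on measured round-trip time.

```python
import socket
import time

# Hypothetical PoP hostnames, for illustration only.
CANDIDATE_POPS = ["pop-nyc.example.net", "pop-chi.example.net",
                  "pop-dal.example.net"]

def measure_rtt_ms(host, port=443, timeout=2.0):
    """Round-trip estimate from a single TCP handshake, in milliseconds."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return float("inf")  # unreachable PoPs sort last
    return (time.perf_counter() - start) * 1000.0

# Route traffic to whichever PoP answers fastest right now.
best = min(CANDIDATE_POPS, key=measure_rtt_ms)
print(f"Lowest-latency PoP: {best}")
```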

For example, Verizon has partnered with Amazon (AWS) to deliver Multi-access Edge Compute (MEC) for its customers. Internet service providers are also deploying low-latency access network technologies like 5G mmWave and Low Latency DOCSIS (LLD) to reduce latency to customers.

The development of this ecosystem is a win-win: it opens new revenue opportunities for providers and network equipment manufacturers, while also changing parts of their cost structure in beneficial ways. For example, as network service providers virtualize more aspects of their service, they use more software-based network equipment in their data centers, which takes up less space and requires less power and cooling.

All of this reduces costs. Freeing up space and power allows data centers to retrofit their physical footprint with additional compute as needed, which in turn means greater capacity to deliver low-latency edge compute experiences. As we head toward the metaverse and a world of more immersive online experiences, the growth and development of this network-side ecosystem will be a vital part of the technology’s overall evolution.

Dhananjay Lal

Vice President of Advanced R&D

Dhananjay (DJ) is responsible for roadmap definition, strategy and R&D activities in Adeia’s Media CTO office. Prior to Adeia, DJ was Senior Director for Emerging Technologies and Platforms at Charter Communications, where he built an R&D team focused on network-powered gaming, AR/VR, holographic/light-field communication, and ML/AI applied to Quality-of-Experience delivery on the network. He has held positions in research, product engineering and product management at organizations including Time Warner Cable, Eaton, Emerson and Bosch. He also served as Board Member and Network Architecture Workgroup Chair at the Immersive Digital Experiences Alliance (IDEA) and holds 16 issued U.S. patents. DJ has a BE in Electronics and Communication Engineering from the Indian Institute of Technology, a Ph.D. in Computer Science from the University of Cincinnati and an MBA in general management from Carnegie Mellon University.