While VR headsets have been making headlines, AR-capable devices have invaded our pockets mostly unnoticed, and smart glasses are just around the corner. Augmented reality is already here, and its applications are about to explode, making a far more intuitive form of communication and user interface ubiquitous.

However, these AR devices have an Achilles heel. Where most of them fall short today is exactly what matters most for an immersive experience: performance.

Mobile CPUs and GPUs deliver decent performance, but they are not even close to what is possible today with a state-of-the-art PC. As a reference for what is possible in visual fidelity, see the tech demo of the new Unreal Engine 5 running on a PlayStation 5:

Sooner or later these capabilities will be available on mobile devices. Bulky smart glasses will eventually evolve into the form factor of regular glasses while also increasing their visual and computing performance, just as cell phones evolved from clunky bricks into smart and pretty all-in-one devices.

But this will take time.

Is it faster with 5G?

There are rumors that the Apple Glasses will come with 5G connectivity, and several smartphones already support it. That makes a lot of sense, and frankly, all AR devices should follow. Why? 5G's big deal isn't the download speed but the latency: it is possible to safely drive a car remotely over 5G because the lag in the connection is so minimal.
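To see why latency is the key figure rather than bandwidth, consider a rough frame-budget calculation. The sketch below is illustrative arithmetic only; the round-trip times are assumed ballpark figures, not measurements:

```kotlin
// Rough latency-budget check: can a network round trip fit inside one frame?
// The RTT values below are assumed ballpark figures, not measurements.
fun main() {
    val displayHz = 60.0
    val frameBudgetMs = 1000.0 / displayHz // ~16.7 ms per frame at 60 Hz

    val assumedRttMs = mapOf(
        "4G LTE" to 50.0,        // typical real-world round trip (assumption)
        "5G" to 15.0,            // optimistic 5G round trip (assumption)
        "WiFi 6 (local)" to 5.0  // local network round trip (assumption)
    )

    for ((network, rtt) in assumedRttMs) {
        val remainingMs = frameBudgetMs - rtt // time left for rendering + encode/decode
        val verdict = if (remainingMs > 0)
            "fits, ${"%.1f".format(remainingMs)} ms left for rendering"
        else
            "does not fit in a single frame"
        println("$network: RTT $rtt ms -> $verdict")
    }
}
```

With these assumed numbers, only low-latency links leave any time at all for the cloud to actually render the frame, which is the whole point of the next section.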

Also, WiFi 6 is here with even lower latency. (See more on how 5G and WiFi 6 complement each other here.)

But how is all of this shaping the future of augmented reality?

The cloud makes the future bright

Nvidia announced CloudXR, which means that ray-traced rendering can be accessed from basically anywhere. It is not hard to connect the dots from here: if AR devices can connect over 5G to a cloud capable of rendering cinematic-quality visuals in real time, they don't need considerable built-in computing power or graphical performance at all.

Stop for a second and let it sink in: the AR device tracks its environment, sends the data to the cloud, the cloud renders the fully detailed 3D model with ray tracing and sends the HQ video feed back, and the device displays it on the screen, all of this done in a few milliseconds, faster than a single frame (!), so the user experience is seamless…
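As a mental model of that loop, here is a minimal sketch in Kotlin. Everything in it is hypothetical: `PoseTracker`, `CloudRenderer`, and `Display` stand in for whatever tracking, streaming, and compositing APIs a real device and service (such as CloudXR) would actually provide:

```kotlin
// Hypothetical interfaces standing in for real tracking/streaming/display APIs.
interface PoseTracker { fun currentPose(): FloatArray }                   // 6-DoF device pose
interface CloudRenderer { fun renderFrame(pose: FloatArray): ByteArray }  // encoded video frame
interface Display { fun present(encodedFrame: ByteArray) }

// One iteration of the remote-rendering loop described above:
// track -> send pose to the cloud -> receive the rendered frame -> show it.
fun renderLoopStep(tracker: PoseTracker, cloud: CloudRenderer, display: Display) {
    val pose = tracker.currentPose()    // 1. track the environment / head pose
    val frame = cloud.renderFrame(pose) // 2-3. the cloud ray-traces the scene for that pose
    display.present(frame)              // 4. decode and composite onto the screen
    // For a seamless experience the whole step must finish within one frame budget
    // (~11-17 ms at 60-90 Hz), which is why network latency is the critical constraint.
}
```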

And this is not science fiction. This technology is not mainstream practice today, but it is already here:

Spatial awareness

Apple debuted a new LiDAR sensor in the latest iPad Pro, enabling it to map its environment in high detail in real time. See how it works.

More and more Android phones are also equipped with depth (ToF) sensors, and Google's ARCore is likewise capable of mapping the surroundings. Building a detailed 3D model of the environment on the fly makes it possible to occlude AR content correctly and to interact with it, as sketched below.
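As a concrete example, ARCore exposes this through its Depth API. The sketch below is a minimal outline, assuming a device that supports depth; error handling and the actual occlusion rendering (normally done in a shader) are omitted:

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException

// Enable ARCore's Depth API on devices that support it.
fun enableDepth(session: Session) {
    val config = session.config
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}

// Per frame: grab the latest depth image. Comparing its values against the
// depth of virtual objects (typically in a shader) is what lets real-world
// geometry occlude AR content.
fun acquireDepth(frame: Frame) {
    try {
        frame.acquireDepthImage16Bits().use { depthImage ->
            // depthImage holds 16-bit depth values in millimeters per pixel.
            val width = depthImage.width
            val height = depthImage.height
            // ...sample or upload to a texture for depth-based occlusion...
        }
    } catch (e: NotYetAvailableException) {
        // Depth data is not available yet (e.g. during the first frames).
    }
}
```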

Future possibilities

It is just one step from here to capture the texture data of the environment as well. Feeding that into cloud-powered RTX rendering lets the real world be reflected on the shiny surfaces of AR content and refracted through the transparent ones, blending realities to a level where it will be challenging to tell them apart.

Spatial anchors can be more precise than ever, opening up brand new frontiers of applications in industry, retail, entertainment, education, and more.

Soon, the gap between the physical and digital world will be bridged beyond our imagination.

The name of the bridge will be augmented reality.

Which one of these technologies makes you the most excited? What could be achieved with all of these? Do you have something to add? Share it in the comment section!

Interested in having an app that pushes the boundaries of what is possible today? Check out our solutions and contact us!