Depth from motion for smartphone AR

Julien Valentin
Neal Wadhwa
Max Dzitsiuk
Michael John Schoenberg
Vivek Verma
Ambrus Csaszar
Ivan Dryanovski
Joao Afonso
Jose Pascoal
Konstantine Nicholas John Tsotsos
Mira Angela Leung
Mirko Schmidt
Sameh Khamis
Vladimir Tankovich
Shahram Izadi
Christoph Rhemann
ACM Transactions on Graphics (2018)

Abstract

Augmented reality (AR) for smartphones has matured from a technology for early adopters, available only on select high-end phones, to one that is truly available to the general public. One of the key breakthroughs has been in low-compute methods for six-degree-of-freedom (6DoF) tracking on phones using only the existing hardware (camera and inertial sensors). 6DoF tracking is the cornerstone of smartphone AR, allowing virtual content to be precisely locked on top of the real world. However, to really give users the impression of believable AR, one requires mobile depth. Without depth, even simple effects, such as a virtual object being correctly occluded by the real world, are impossible. Yet requiring a dedicated mobile depth sensor would severely restrict access to such features. In this article, we provide a novel pipeline for mobile depth that supports a wide array of mobile phones and uses only the existing monocular color sensor. Through several technical contributions, we provide the ability to compute low-latency dense depth maps using only a single CPU core on a wide range of mid- to high-end mobile phones. We demonstrate the capabilities of our approach in high-level AR applications, including real-time navigation and shopping.
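
To make the core idea concrete, below is a minimal sketch (in Python with numpy) of how depth can be recovered from motion: given two views of the same scene point and the camera poses supplied by 6DoF tracking, the point's 3D position, and hence its depth, follows from linear triangulation. This is an illustrative sketch under assumed conventions; the function names and the direct-linear-transform (DLT) formulation are our assumptions, not the paper's pipeline, which adds considerable machinery to produce dense, low-latency depth maps on a single CPU core.

    # Illustrative sketch only (assumed conventions, not the paper's method):
    # recover the 3D position of a point matched across two posed views.
    import numpy as np

    def triangulate(K, pose_a, pose_b, px_a, px_b):
        """Linear (DLT) triangulation of one point seen in two posed views.

        K       -- 3x3 camera intrinsics
        pose_a  -- 3x4 world-to-camera matrix [R|t] for view A
        pose_b  -- 3x4 world-to-camera matrix [R|t] for view B
        px_a/b  -- (u, v) pixel of the same scene point in each view
        Returns the 3D point in world coordinates.
        """
        P_a = K @ pose_a  # 3x4 projection matrix for view A
        P_b = K @ pose_b  # 3x4 projection matrix for view B
        u_a, v_a = px_a
        u_b, v_b = px_b
        # Each observation u = (P[0].X)/(P[2].X), v = (P[1].X)/(P[2].X)
        # contributes two rows of the homogeneous system A X = 0.
        A = np.stack([
            u_a * P_a[2] - P_a[0],
            v_a * P_a[2] - P_a[1],
            u_b * P_b[2] - P_b[0],
            v_b * P_b[2] - P_b[1],
        ])
        # The solution is the right singular vector with the smallest
        # singular value, i.e. the last row of Vt.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]  # dehomogenize

The depth used for effects such as occlusion is then the z-coordinate of the triangulated point expressed in the current camera's frame, e.g. depth = (pose_a @ np.append(X, 1.0))[2] under the same assumed conventions.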
