Authors:
Charles Hamesse 1,2; Hiep Luong 2 and Rob Haelterman 1
Affiliations:
1 XR Lab, Department of Mathematics, Royal Military Academy, Belgium; 2 imec - IPI - URC, Ghent University, Belgium
Keyword(s):
Depth Sensing, Depth Estimation, 3D Reconstruction.
Abstract:
In recent years, deep-learning techniques for dense depth estimation from monocular RGB frames have increasingly emerged as potential alternatives to 3D sensors such as depth cameras for 3D reconstruction. Recent works report ever more interesting capabilities: estimation of high-resolution depth maps, handling of occlusions, or fast execution on various hardware platforms, to name a few. However, it remains unclear whether these methods could actually replace depth cameras, and if so, in which scenarios it is really beneficial to do so. In this paper, we show that the errors made by deep-learning methods for dense depth estimation have a specific nature, very different from that of depth maps acquired from depth cameras (be it with stereo vision, time-of-flight or other technologies). We deliberately take a high vantage point and analyze state-of-the-art dense depth estimation techniques and depth sensors in a hand-picked test scene, with the aim of better understanding the current strengths and weaknesses of the different methods and providing guidelines for the design of robust systems that rely on dense depth perception for 3D reconstruction.