6 EXPERIMENTATION RESULTS
FROM IPS DATA
To evaluate the proposed system on real data, we used a dataset captured by the stereo-camera-based IPS (Baumbach and Zuev, 2014). Since the data stems from a real environment, no ground-truth surface is available, so the absolute surface error between the reconstructed model and the actual environment cannot be measured. Screenshots of the reconstructions are therefore shown in Figures 5 and 6 for visual inspection and evaluation.
7 CONCLUSION AND OUTLOOK
In this paper, we presented a novel approach to the challenges of 3D depth fusion and reconstruction based on an L2-regularization-based recursive fusion framework. We demonstrated that the proposed system reduces noise while supporting incremental 3D depth fusion. The current implementation relies purely on multi-threaded CPU processing; further work is required to extend the framework to exploit modern GPU computation alongside the CPU. Furthermore, since the system handles noise inherently, integrating planar simplification techniques for improved 3D reconstruction is an interesting direction for future research.
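The recursive, L2-regularized fusion summarized above can be illustrated with a minimal sketch. The function names, the weighted running-average update, and the gradient-descent smoother below are illustrative assumptions on our part, not the paper's actual implementation: each voxel state is updated recursively with every new TSDF observation, and an L2 (Tikhonov) penalty on the signal gradient suppresses noise in the fused result.

```python
import numpy as np

def fuse_tsdf(x_hat, w, y, w_y=1.0):
    """Recursive per-voxel update: fold a new TSDF observation y
    (weight w_y) into the current estimate x_hat (accumulated
    weight w) via a weighted running average."""
    x_new = (w * x_hat + w_y * y) / (w + w_y)
    return x_new, w + w_y

def l2_regularize(x, lam=0.5, iters=20, step=0.1):
    """L2 (Tikhonov) smoothing of a 1-D SDF signal: approximately
    minimizes ||u - x||^2 + lam * ||grad u||^2 by gradient descent,
    using a discrete Laplacian on interior samples."""
    u = x.copy()
    for _ in range(iters):
        lap = np.zeros_like(u)
        lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]  # discrete Laplacian
        u -= step * ((u - x) - lam * lap)           # data term - smoothness term
    return u
```

A typical cycle would fuse each incoming TSDF slice voxel-wise with `fuse_tsdf` and then apply `l2_regularize` to the fused signal; the weighted average keeps the update incremental (no stored observation history), which is the property the recursive formulation is after.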
REFERENCES
Google (2014). ATAP Project Tango. http://www.google.com/atap/projecttango/. Accessed: 2015-11-22.
Baumbach, D. G. D. and Zuev, S. (2014). Stereo-Vision-Aided Inertial Navigation for Unknown Indoor and Outdoor Environments. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2014. IEEE.
Chen, J., Bautembach, D., and Izadi, S. (2013). Scalable
real-time volumetric surface reconstruction. ACM
Trans. Graph., 32(4):113:1–113:16.
Urmson, C. et al. (2008). Autonomous driving in urban environments: Boss and the urban challenge. Journal of Field Robotics Special Issue on the 2007 DARPA Urban Challenge, Part I, 25(8):425–466.
Curless, B. and Levoy, M. (1996). A volumetric method for
building complex models from range images. In Pro-
ceedings of the 23rd Annual Conference on Computer
Graphics and Interactive Techniques, SIGGRAPH
’96, pages 303–312, New York, NY, USA. ACM.
Dahlkamp, H., Kaehler, A., Stavens, D., Thrun, S., and
Bradski, G. (2006). Self-supervised monocular road
detection in desert terrain. In Proceedings of Robotics:
Science and Systems, Philadelphia, USA.
Funk, E. and Börner, A. (2016). Infinite 3d modelling volumes. In VISAPP 2016.
Handa, A., Whelan, T., McDonald, J., and Davison, A. J.
(2014). A benchmark for rgb-d visual odometry, 3d
reconstruction and slam. In Robotics and Automa-
tion (ICRA), 2014 IEEE International Conference on,
pages 1524–1531. IEEE.
Hicks, S. L., Wilson, I., Muhammed, L., Worsfold, J.,
Downes, S. M., and Kennard, C. (2013). A depth-
based head-mounted visual display to aid naviga-
tion in partially sighted individuals. PLoS ONE,
8(7):e67695.
Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe,
R., Kohli, P., Shotton, J., Hodges, S., Freeman, D.,
Davison, A., and Fitzgibbon, A. (2011). Kinectfu-
sion: Real-time 3d reconstruction and interaction us-
ing a moving depth camera. In ACM Symposium on
User Interface Software and Technology. ACM.
Kähler, O., Prisacariu, V. A., Ren, C. Y., Sun, X., Torr, P. H. S., and Murray, D. W. (2015). Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices. IEEE Transactions on Visualization and Computer Graphics (Proceedings International Symposium on Mixed and Augmented Reality 2015), 22(11).
Newcombe, R. A., Lovegrove, S. J., and Davison, A. J.
(2011). Dtam: Dense tracking and mapping in real-
time. In Proceedings of the 2011 International Con-
ference on Computer Vision, ICCV ’11, pages 2320–
2327, Washington, DC, USA. IEEE Computer Soci-
ety.
Rudin, L. I., Osher, S., and Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1–4):259–268.
Steinbruecker, F., Sturm, J., and Cremers, D. (2014). Volumetric 3d mapping in real-time on a cpu. In Int. Conf. on Robotics and Automation, Hong Kong, China.
Stühmer, J., Gumhold, S., and Cremers, D. (2010). Real-time dense geometry from a handheld camera. In Pattern Recognition (Proc. DAGM), pages 11–20, Darmstadt, Germany.
Taneja, A., Ballan, L., and Pollefeys, M. (2013). City-scale
change detection in cadastral 3d models using images.
In Computer Vision and Pattern Recognition (CVPR),
Portland.
APPENDIX
Derivation of RFusion
In this section we derive the equations of RFusion. For the sake of readability, we formulate them for a 2D fusion system rather than the full 3D system. Let n denote the support of the SDF signal, x̂ the estimated state of the system for a particular 3D voxel, and y the new TSDF signal. Then such a system can easily be
SIGMAP 2016 - International Conference on Signal Processing and Multimedia Applications