the RGB information along with the depth data to elevate the 3D model quality.
ACKNOWLEDGEMENTS
This research has been performed within the PANORAMA project, co-funded by grants from Belgium, Italy, France, the Netherlands, the United Kingdom, and the ENIAC Joint Undertaking.
Improved ICP-based Pose Estimation by Distance-aware 3D Mapping