Towards Non-rigid Reconstruction - How to Adapt Rigid RGB-D Reconstruction to Non-rigid Movements?

Oliver Wasenmüller, Benjamin Schenkenberger, Didier Stricker

2017

Abstract

Human body reconstruction is a very active field in recent computer vision research. A key challenge is that the human body moves during capture, even when the subject tries to stay still. Algorithms that explicitly cope with such non-rigid movements are therefore indispensable. In this paper, we propose a novel algorithm that extends existing rigid RGB-D reconstruction pipelines to handle non-rigid transformations. The idea is to store, in addition to the model, the non-rigid transformation nrt of the current frame as a sparse warp field in image space, and we propose an algorithm to incrementally update this transformation. In the evaluation we show that the novel algorithm provides accurate reconstructions and can cope with non-rigid movements of up to 5 cm.
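
As a rough illustration of the idea described above, the Python sketch below models a sparse warp field in image space: a grid of warp nodes, each storing a 3D displacement, is queried by blending the displacements of the nearest nodes and is updated incrementally from observed per-pixel residuals. The class name, the regular node grid, the inverse-distance weights, and the update rule are illustrative assumptions, not the authors' exact formulation.

# Minimal sketch of a sparse warp field in image space, assuming a regular
# node grid and inverse-distance blending (illustrative, not the paper's method).
import numpy as np

class SparseWarpField:
    """Sparse grid of warp nodes; each node stores a 3D displacement."""

    def __init__(self, image_size, spacing=16):
        h, w = image_size
        # Node centres on a regular pixel grid, stored as (x, y) coordinates.
        ys, xs = np.mgrid[0:h:spacing, 0:w:spacing]
        self.nodes = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
        self.displacements = np.zeros((len(self.nodes), 3), dtype=np.float32)

    def warp(self, pixel, point_3d, k=4, eps=1e-6):
        """Warp a 3D point by blending the displacements of its k nearest nodes."""
        d = np.linalg.norm(self.nodes - np.asarray(pixel, np.float32), axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + eps)          # inverse-distance weights
        w /= w.sum()
        return point_3d + (w[:, None] * self.displacements[idx]).sum(axis=0)

    def update(self, pixel, residual, k=4, lr=0.5, eps=1e-6):
        """Incrementally pull nearby nodes towards an observed 3D residual."""
        d = np.linalg.norm(self.nodes - np.asarray(pixel, np.float32), axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + eps)
        w /= w.sum()
        self.displacements[idx] += lr * w[:, None] * np.asarray(residual, np.float32)

# Example: one correspondence with a 2 cm non-rigid offset at pixel (x=120, y=80).
wf = SparseWarpField(image_size=(240, 320), spacing=16)
wf.update(pixel=(120, 80), residual=(0.0, 0.02, 0.0))
print(wf.warp(pixel=(120, 80), point_3d=np.array([0.1, 0.2, 1.0])))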



Paper Citation


in Harvard Style

Wasenmüller O., Schenkenberger B. and Stricker D. (2017). Towards Non-rigid Reconstruction - How to Adapt Rigid RGB-D Reconstruction to Non-rigid Movements?. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 6: VISAPP, (VISIGRAPP 2017), ISBN 978-989-758-227-1, pages 294-299. DOI: 10.5220/0006172402940299


in Bibtex Style

@conference{visapp17,
author={Oliver Wasenmüller and Benjamin Schenkenberger and Didier Stricker},
title={Towards Non-rigid Reconstruction - How to Adapt Rigid RGB-D Reconstruction to Non-rigid Movements?},
booktitle={Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 6: VISAPP, (VISIGRAPP 2017)},
year={2017},
pages={294-299},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006172402940299},
isbn={978-989-758-227-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 6: VISAPP, (VISIGRAPP 2017)
TI - Towards Non-rigid Reconstruction - How to Adapt Rigid RGB-D Reconstruction to Non-rigid Movements?
SN - 978-989-758-227-1
AU - Wasenmüller O.
AU - Schenkenberger B.
AU - Stricker D.
PY - 2017
SP - 294
EP - 299
DO - 10.5220/0006172402940299