explicitly estimating and considering the non-rigid transformation of the scene. We proposed a novel image-based sparse warp field to compute, store, and apply this transformation efficiently. In the evaluation we showed that the reconstruction achieves state-of-the-art accuracy for rigid scenes and is able to reconstruct non-rigid scenes with up to 5 cm of movement.
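As a rough illustration only, the following Python sketch shows one way an image-based sparse warp field could be stored and applied: a few anchor pixels each hold a rigid transform, and a point observed at a given pixel is warped by blending the anchor transforms with image-space weights. The class name, the Gaussian weighting, and the parameter sigma are illustrative assumptions, not the exact formulation of our method.

```python
# Minimal sketch (assumptions, not the paper's exact formulation): an
# image-based sparse warp field stored as anchor pixels, each holding a
# rigid transform (R, t). A 3D point observed at pixel (u, v) is warped by
# blending the anchor transforms with weights from image-space distance.
import numpy as np


class SparseWarpField:
    def __init__(self, anchor_pixels, rotations, translations, sigma=30.0):
        # anchor_pixels: (N, 2) pixel coordinates of the sparse anchors
        # rotations:     (N, 3, 3) rotation matrices, one per anchor
        # translations:  (N, 3) translation vectors, one per anchor
        self.anchor_pixels = np.asarray(anchor_pixels, dtype=np.float64)
        self.rotations = np.asarray(rotations, dtype=np.float64)
        self.translations = np.asarray(translations, dtype=np.float64)
        self.sigma = sigma  # spatial extent of each anchor (illustrative)

    def warp(self, pixel, point):
        # Gaussian weights from the image-space distance to each anchor.
        d2 = np.sum((self.anchor_pixels - pixel) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))
        w /= w.sum()
        # Apply each anchor's rigid transform, then blend the results.
        warped = self.rotations @ point + self.translations  # (N, 3)
        return (w[:, None] * warped).sum(axis=0)


if __name__ == "__main__":
    # Two anchors: identity on the left, a 5 cm shift in x on the right.
    field = SparseWarpField(
        anchor_pixels=[[100, 240], [500, 240]],
        rotations=[np.eye(3), np.eye(3)],
        translations=[[0.0, 0.0, 0.0], [0.05, 0.0, 0.0]],
    )
    # A point observed halfway between the anchors moves by about 2.5 cm.
    print(field.warp(np.array([300.0, 240.0]), np.array([0.0, 0.0, 1.0])))
```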
ACKNOWLEDGEMENTS
This work was partially funded by the Federal Ministry of Education and Research (Germany) in the context of the Software Campus in the project Body Analyzer. We thank the Video Analytics Austria Research Group (CT RTC ICV VIA-AT) of Siemens, especially Michael Hornacek and Claudia Windisch, for the fruitful collaboration.