Transforming Intangible Folkloric Performing Arts into Tangible Choreographic Digital Objects: The Terpsichore Approach

Anastasios Doulamis, Athanasios Voulodimos, Nikolaos Doulamis, Sofia Soile, Anastasios Lampropoulos

2017

Abstract

Intangible Cultural Heritage is a mainspring of cultural diversity and, as such, it should be a focal point of cultural heritage preservation and safeguarding endeavours. Nevertheless, although significant progress has been made in digitisation technology for tangible cultural assets, especially in the area of 3D reconstruction, the e-documentation of intangible cultural heritage has not seen comparable progress. One of the main reasons lies in the significant challenges involved in the systematic e-digitisation of intangible cultural assets, such as performing arts. In this paper, we present, at a high level, an approach for transforming intangible cultural assets, namely folk dances, into tangible choreographic digital objects. The approach is being implemented in the context of the H2020 European project “Terpsichore”.



Paper Citation


in Harvard Style

Doulamis A., Voulodimos A., Doulamis N., Soile S. and Lampropoulos A. (2017). Transforming Intangible Folkloric Performing Arts into Tangible Choreographic Digital Objects: The Terpsichore Approach. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: CVICG4CULT, ISBN 978-989-758-226-4, pages 451-460. DOI: 10.5220/0006347304510460


in BibTeX Style

@conference{cvicg4cult17,
author={Anastasios Doulamis and Athanasios Voulodimos and Nikolaos Doulamis and Sofia Soile and Anastasios Lampropoulos},
title={Transforming Intangible Folkloric Performing Arts into Tangible Choreographic Digital Objects: The Terpsichore Approach},
booktitle={Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: CVICG4CULT},
year={2017},
pages={451-460},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006347304510460},
isbn={978-989-758-226-4},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: CVICG4CULT
TI - Transforming Intangible Folkloric Performing Arts into Tangible Choreographic Digital Objects: The Terpsichore Approach
SN - 978-989-758-226-4
AU - Doulamis A.
AU - Voulodimos A.
AU - Doulamis N.
AU - Soile S.
AU - Lampropoulos A.
PY - 2017
SP - 451
EP - 460
DO - 10.5220/0006347304510460