Coupling Camera-tracked Humans with a Simulated Virtual Crowd

Jorge Ivan Rivalcoba Rivas, Oriam De Gyves, Isaac Rudomín, Nuria Pelechano

2014

Abstract

Our objective in this paper is to show how a group of real people can be coupled with a simulated crowd of virtual humans. We attach group behaviors to the simulated humans so that they react plausibly to the real people. We use a two-stage system: in the first stage, a group of people is segmented from live video, and a human-detector algorithm extracts the positions of the people in the video, which then feed the second stage, the simulation system. These positions allow the second module to render the real humans as avatars in the scene, while the behavior of the additional virtual humans is determined by a simulation based on a social forces model. Developing the method required three specific contributions: a GPU implementation of the codebook algorithm that includes an auxiliary codebook to make background subtraction more robust to illumination changes; the use of semantic local binary patterns as a human descriptor; and the parallelization of a social forces model, in which we solve a case of agents merging with each other. The experimental results show a large virtual crowd reacting to over a dozen humans in a real environment.
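The simulation stage described above is based on a social forces model: each virtual human is driven toward its goal at a desired speed while nearby agents exert exponential repulsive forces on it. As a rough illustration of that idea, here is a minimal sketch of a Helbing-Molnár-style force computation; all parameter values (relaxation time, repulsion strength and range, agent radius, desired speed) are illustrative assumptions and not the values or the parallel GPU implementation used in the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's values).
TAU = 0.5      # relaxation time toward the desired velocity (s)
A = 2.0        # repulsion strength
B = 0.3        # repulsion range (m)
RADIUS = 0.25  # agent radius (m)

def social_force(pos, vel, goal, desired_speed=1.3):
    """Per-agent acceleration: goal attraction plus pairwise repulsion.

    pos, vel, goal: (N, 2) arrays of positions, velocities, and targets.
    """
    # Driving force: steer each agent toward its goal at the desired speed.
    to_goal = goal - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True)
    e = to_goal / np.maximum(dist, 1e-9)          # unit direction to goal
    f_goal = (desired_speed * e - vel) / TAU

    # Pairwise repulsion: agents push each other apart, with a force that
    # grows exponentially as they approach; this is the term that keeps
    # agents from merging into one another.
    diff = pos[:, None, :] - pos[None, :, :]      # (N, N, 2) offsets i <- j
    d = np.linalg.norm(diff, axis=2)              # (N, N) distances
    np.fill_diagonal(d, np.inf)                   # no self-interaction
    n = diff / d[..., None]                       # unit vectors away from j
    mag = A * np.exp((2 * RADIUS - d) / B)        # exponential repulsion
    f_rep = (mag[..., None] * n).sum(axis=1)

    return f_goal + f_rep
```

In a full simulation, this acceleration would be integrated each frame to update velocities and positions; the paper parallelizes this per-agent computation on the GPU.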



Paper Citation


in Harvard Style

Rivalcoba Rivas J., De Gyves O., Rudomín I. and Pelechano N. (2014). Coupling Camera-tracked Humans with a Simulated Virtual Crowd. In Proceedings of the 9th International Conference on Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2014) ISBN 978-989-758-002-4, pages 312-321. DOI: 10.5220/0004694403120321


in Bibtex Style

@conference{grapp14,
author={Jorge Ivan Rivalcoba Rivas and Oriam De Gyves and Isaac Rudomín and Nuria Pelechano},
title={Coupling Camera-tracked Humans with a Simulated Virtual Crowd},
booktitle={Proceedings of the 9th International Conference on Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2014)},
year={2014},
pages={312-321},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004694403120321},
isbn={978-989-758-002-4},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Computer Graphics Theory and Applications - Volume 1: GRAPP, (VISIGRAPP 2014)
TI - Coupling Camera-tracked Humans with a Simulated Virtual Crowd
SN - 978-989-758-002-4
AU - Rivalcoba Rivas J.
AU - De Gyves O.
AU - Rudomín I.
AU - Pelechano N.
PY - 2014
SP - 312
EP - 321
DO - 10.5220/0004694403120321