HMM-based Activity Recognition with a Ceiling RGB-D Camera

Daniele Liciotti, Emanuele Frontoni, Primo Zingaretti, Nicola Bellotto, Tom Duckett

2017

Abstract

Automated recognition of Activities of Daily Living (ADLs) makes it possible to identify potential health problems and to apply corrective strategies in Ambient Assisted Living (AAL). ADL analysis can provide very useful information for elder care and long-term care services. This paper presents an automated RGB-D video analysis system that recognises human ADLs related to common daily actions. The main goal is to estimate the probability of the action being performed by the observed subject, so that abnormal behaviour can be detected. Activity detection and recognition are performed using an affordable RGB-D camera. Human activities, despite their unstructured nature, tend to have a natural hierarchical structure; for instance, making a coffee generally involves a three-step process of turning on the coffee machine, putting sugar in the cup and opening the fridge for the milk. Action sequence recognition is then handled using a discriminative Hidden Markov Model (HMM). RADiaL, a dataset with RGB-D images and the 3D position of each person, has been built for training and evaluating the HMM and made publicly available.
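As a rough illustration of the sequence-recognition step described above, the sketch below decodes a short "make coffee" routine with the Viterbi algorithm over a small discrete HMM. All states, observations and probabilities here are invented for illustration; they are not the parameters learned in the paper, and the paper's discriminative HMM differs from this plain generative formulation.

```python
import numpy as np

# Hypothetical 3-state activity HMM for the "make coffee" routine from the
# abstract: turn_on_machine -> add_sugar -> open_fridge. Parameters are
# illustrative, not those learned from the RADiaL dataset.
states = ["turn_on_machine", "add_sugar", "open_fridge"]
observations = ["at_machine", "at_counter", "at_fridge"]  # coarse position cues

# A[i, j] = P(state_j | state_i): mostly forward progression through the task.
A = np.array([[0.60, 0.35, 0.05],
              [0.05, 0.60, 0.35],
              [0.05, 0.05, 0.90]])
# B[i, k] = P(obs_k | state_i): each step is usually seen at its own location.
B = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
pi = np.array([0.8, 0.1, 0.1])  # initial state distribution

def viterbi(obs_seq):
    """Return the most likely hidden activity sequence for obs_seq (indices)."""
    T, N = len(obs_seq), len(states)
    delta = np.zeros((T, N))            # best path probability ending in each state
    psi = np.zeros((T, N), dtype=int)   # backpointers
    delta[0] = pi * B[:, obs_seq[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, obs_seq[t]]
    # Backtrack from the best final state.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return [states[s] for s in reversed(path)]

obs = [observations.index(o) for o in ["at_machine", "at_counter", "at_fridge"]]
print(viterbi(obs))  # -> ['turn_on_machine', 'add_sugar', 'open_fridge']
```

In practice the transition and emission parameters would be estimated from labelled sequences (e.g. via Baum-Welch, as in Baum 1972, or supervised counting), rather than set by hand as here.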



Paper Citation


in Harvard Style

Liciotti D., Frontoni E., Zingaretti P., Bellotto N. and Duckett T. (2017). HMM-based Activity Recognition with a Ceiling RGB-D Camera. In Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM, ISBN 978-989-758-222-6, pages 567-574. DOI: 10.5220/0006202305670574


in Bibtex Style

@conference{icpram17,
author={Daniele Liciotti and Emanuele Frontoni and Primo Zingaretti and Nicola Bellotto and Tom Duckett},
title={HMM-based Activity Recognition with a Ceiling RGB-D Camera},
booktitle={Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM},
year={2017},
pages={567-574},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006202305670574},
isbn={978-989-758-222-6},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM
TI - HMM-based Activity Recognition with a Ceiling RGB-D Camera
SN - 978-989-758-222-6
AU - Liciotti D.
AU - Frontoni E.
AU - Zingaretti P.
AU - Bellotto N.
AU - Duckett T.
PY - 2017
SP - 567
EP - 574
DO - 10.5220/0006202305670574