Skeleton-based Human Action Recognition - A Learning Method based on Active Joints

Ahmad K. N. Tehrani, Maryam Asadi Aghbolaghi, Shohreh Kasaei

2017

Abstract

A novel method for human action recognition from sequences of skeletal data is presented in this paper. The proposed method is based on the observation that some body joints are inactive and carry no physical meaning while an action is performed. In other words, regardless of the subject performing an action, only a certain set of joints is meaningfully involved in each action. Consequently, extracting features from inactive joints wastes computation. To cope with this problem, only the dynamics of the active joints are modeled in this paper. To capture local temporal information, a sliding window divides each active-joint trajectory into consecutive windows. Features are then extracted from all windows of the active-joint trajectories and quantized using K-means clustering. Since each action has its own set of active joints, a one-vs-all classification strategy is adopted. Finally, to take global motion information into account, the consecutive quantized features of the samples of an action are fed into the hidden Markov model (HMM) of that action. Experimental results show that using only the active joints achieves 96% of the maximum accuracy reachable by using all joints.
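The front end of the pipeline described above (sliding windows over active-joint trajectories, per-window feature extraction, and K-means vector quantization into the discrete symbols consumed by the HMMs) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window descriptor, codebook, and trajectory here are toy placeholders.

```python
import numpy as np

def sliding_windows(trajectory, win, step):
    """Split a (T, D) active-joint trajectory into overlapping windows."""
    return [trajectory[s:s + win] for s in range(0, len(trajectory) - win + 1, step)]

def window_features(windows):
    """Toy per-window descriptor (net displacement); a stand-in for the paper's features."""
    return np.array([w[-1] - w[0] for w in windows])

def kmeans_quantize(feats, codebook):
    """Assign each window feature to its nearest codebook centre (vector quantization).

    `codebook` would come from K-means run on training features; here it is random.
    The returned symbol sequence is what would be fed to a per-action HMM.
    """
    dists = np.linalg.norm(feats[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy usage: a 20-frame trajectory of one active joint's 3-D coordinates.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.random((20, 3)), axis=0)
codebook = rng.random((4, 3))           # 4 cluster centres, assumed pre-trained
symbols = kmeans_quantize(window_features(sliding_windows(traj, win=5, step=2)),
                          codebook)
```

With a window of 5 frames and a step of 2, the 20-frame trajectory yields 8 windows, so `symbols` is a sequence of 8 discrete observations in `{0, 1, 2, 3}`, one per window.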



Paper Citation


in Harvard Style

K. N. Tehrani A., Asadi Aghbolaghi M. and Kasaei S. (2017). Skeleton-based Human Action Recognition - A Learning Method based on Active Joints. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, (VISIGRAPP 2017) ISBN 978-989-758-226-4, pages 303-310. DOI: 10.5220/0006134903030310


in Bibtex Style

@conference{visapp17,
author={Ahmad K. N. Tehrani and Maryam Asadi Aghbolaghi and Shohreh Kasaei},
title={Skeleton-based Human Action Recognition - A Learning Method based on Active Joints},
booktitle={Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, (VISIGRAPP 2017)},
year={2017},
pages={303-310},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0006134903030310},
isbn={978-989-758-226-4},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, (VISIGRAPP 2017)
TI - Skeleton-based Human Action Recognition - A Learning Method based on Active Joints
SN - 978-989-758-226-4
AU - K. N. Tehrani A.
AU - Asadi Aghbolaghi M.
AU - Kasaei S.
PY - 2017
SP - 303
EP - 310
DO - 10.5220/0006134903030310