Authors: Guang Chen (1); Daniel Clarke (2) and Alois Knoll (3)
Affiliations: (1) Technische Universität München and An-Institut Technische Universität München, Germany; (2) An-Institut Technische Universität München, Germany; (3) Technische Universität München, Germany
Keyword(s):
Unsupervised Learning, Weighted Joint-based Features, Action Recognition, Depth Video Data.
Related Ontology Subjects/Areas/Topics: Applications and Services; Computer Vision, Visualization and Computer Graphics; Enterprise Information Systems; Entertainment Imaging Applications; Human and Computer Interaction; Human-Computer Interaction
Abstract:
Human action recognition based on joints is a challenging task. The 3D positions of the tracked joints are very noisy when occlusions occur, which increases the intra-class variation in the actions. In this paper, we propose a novel approach to recognize human actions with weighted joint-based features. Previous work has focused on hand-tuned joint-based features, which are difficult and time-consuming to extend to other modalities. In contrast, we compute the joint-based features using an unsupervised learning approach. To capture the intra-class variance, a multiple kernel learning approach is employed to learn the skeleton structure that combines these joint-based features. We evaluate our algorithm on the Microsoft Research Action3D (MSRAction3D) dataset. Experimental evaluation shows that the proposed approach outperforms state-of-the-art action recognition algorithms on depth videos.
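The abstract's pipeline (unsupervised joint-feature learning followed by a weighted combination of kernels) can be illustrated with a minimal sketch. Everything below is hypothetical: the toy data, the k-means-style codebook standing in for the paper's unsupervised feature learning, and the fixed uniform kernel weights standing in for weights that multiple kernel learning would actually optimize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for skeleton data: 30 clips, each summarized as a
# 60-dim vector (e.g. 20 joints x 3D, averaged over frames).
X = rng.normal(size=(30, 60))

def learn_codebook(X, k=5, iters=10):
    """Unsupervised feature learning via a tiny k-means codebook
    (a simple stand-in for the paper's unsupervised approach)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return centers

def encode(X, centers):
    """Soft-assignment features: similarity to each codeword."""
    d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    return np.exp(-d / d.mean())

def rbf_kernel(F, gamma):
    sq = ((F[:, None, :] - F[None]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

centers = learn_codebook(X)
F = encode(X, centers)

# Multiple-kernel combination, crudely: a convex sum of base kernels.
# Here the weights are fixed and uniform; an MKL solver would learn
# them (per joint group in the paper's setting) from labeled data.
kernels = [rbf_kernel(F, g) for g in (0.1, 1.0, 10.0)]
weights = np.ones(len(kernels)) / len(kernels)
K = sum(w * k for w, k in zip(weights, kernels))
print(K.shape)  # prints (30, 30)
```

The combined matrix K is a valid kernel (a convex combination of PSD kernels is PSD), so it can be plugged directly into a kernel classifier such as an SVM.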