Authors:
Yassine Benabbas, Samir Amir, Adel Lablack and Chabane Djeraba
Affiliation:
University of Lille1, TELECOM Lille1 and IRCICA, France
Keyword(s):
Human action recognition, Motion analysis, Video understanding.
Related Ontology Subjects/Areas/Topics:
Applications; Computer Vision, Visualization and Computer Graphics; Feature Extraction; Features Extraction; Human-Computer Interaction; Image and Video Analysis; Informatics in Control, Automation and Robotics; Methodologies and Methods; Motion and Tracking; Motion, Tracking and Stereo Vision; Pattern Recognition; Physiological Computing Systems; Signal Processing, Sensors, Systems Modeling and Control; Software Engineering; Video Analysis
Abstract:
This paper proposes an approach that uses direction and magnitude models to perform human action recognition in videos captured by monocular cameras. A mixture distribution over the motion orientations and magnitudes of optical flow vectors is computed at each spatial location of the video sequence and estimated with an online k-means clustering algorithm. A sequence model, composed of a direction model and a magnitude model, is thus built via circular clustering of the orientations and non-circular clustering of the magnitudes. Human actions are recognized with a metric based on the Bhattacharyya distance that compares the model of a query sequence against the models created from the training sequences. The proposed approach is validated on two public datasets, in both indoor and outdoor environments, with low- and high-resolution videos.
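The sketch below illustrates the kind of model building the abstract describes, assuming OpenCV's Farneback optical flow as the motion source. The cluster count K, learning rate, per-block (rather than per-pixel) observations, and random centroid initialization are illustrative assumptions, not values or choices from the paper; orientation centroids are updated with a circular rule, magnitude centroids with ordinary 1-D online k-means.

```python
import numpy as np
import cv2  # used only for optical flow; any flow estimator would do

K = 4      # clusters per spatial block (illustrative, not from the paper)
LR = 0.05  # online k-means learning rate (assumed)

def circular_diff(a, b):
    """Smallest signed angular difference a - b, wrapped to [-pi, pi)."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

class SequenceModel:
    """Direction and magnitude mixtures per spatial block, fit by online k-means."""

    def __init__(self, n_blocks):
        rng = np.random.default_rng(0)
        self.dir_centroids = rng.uniform(0, 2 * np.pi, (n_blocks, K))
        self.mag_centroids = rng.uniform(0, 1, (n_blocks, K))
        self.dir_counts = np.zeros((n_blocks, K))
        self.mag_counts = np.zeros((n_blocks, K))

    def update(self, angles, mags):
        """angles, mags: one observation per spatial block for a frame pair."""
        for b, (a, m) in enumerate(zip(angles, mags)):
            # circular clustering of the orientation: nearest centroid on the circle
            j = np.argmin(np.abs(circular_diff(a, self.dir_centroids[b])))
            self.dir_centroids[b, j] = (self.dir_centroids[b, j]
                + LR * circular_diff(a, self.dir_centroids[b, j])) % (2 * np.pi)
            self.dir_counts[b, j] += 1
            # non-circular clustering of the magnitude: ordinary online k-means
            j = np.argmin(np.abs(m - self.mag_centroids[b]))
            self.mag_centroids[b, j] += LR * (m - self.mag_centroids[b, j])
            self.mag_counts[b, j] += 1

    def weights(self):
        """Mixture weights (normalized assignment counts) per block."""
        dw = self.dir_counts / np.maximum(self.dir_counts.sum(1, keepdims=True), 1)
        mw = self.mag_counts / np.maximum(self.mag_counts.sum(1, keepdims=True), 1)
        return dw, mw

def fit_sequence(frames, block=16):
    """Build a SequenceModel from a list of grayscale frames."""
    h, w = frames[0].shape
    nby, nbx = h // block, w // block
    model = SequenceModel(nby * nbx)
    for prev, nxt in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        angs, mags = [], []
        for y in range(nby):
            for x in range(nbx):
                ba = ang[y*block:(y+1)*block, x*block:(x+1)*block]
                bm = mag[y*block:(y+1)*block, x*block:(x+1)*block]
                # circular mean of the block's orientations
                angs.append(np.arctan2(np.sin(ba).mean(),
                                       np.cos(ba).mean()) % (2 * np.pi))
                mags.append(bm.mean())
        model.update(angs, mags)
    return model
```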
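Given two such models, the Bhattacharyya-based comparison could then look like the following sketch. It compares only the mixture weights and averages the direction and magnitude distances with equal weight; both are simplifying assumptions rather than the paper's exact metric, and a fuller implementation would also account for centroid values.

```python
def bhattacharyya(p, q):
    """Bhattacharyya distance between two discrete distributions p and q."""
    bc = np.sum(np.sqrt(p * q))         # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))  # 0 when identical, 1 when disjoint

def model_distance(m1, m2):
    """Average per-block Bhattacharyya distance over the direction and
    magnitude models; equal weighting of the two is an assumption."""
    dw1, mw1 = m1.weights()
    dw2, mw2 = m2.weights()
    d_dir = np.mean([bhattacharyya(p, q) for p, q in zip(dw1, dw2)])
    d_mag = np.mean([bhattacharyya(p, q) for p, q in zip(mw1, mw2)])
    return 0.5 * (d_dir + d_mag)

def recognize(query, training):
    """Nearest-neighbor recognition: training is a list of (label, model) pairs."""
    label, _ = min(training, key=lambda lm: model_distance(query, lm[1]))
    return label
```

In use, each training sequence would be fit with fit_sequence and stored with its action label, and a query video's model would be passed to recognize to obtain the label of its nearest training model.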