Authors: Khaja Wasif Mohiuddin¹; Jagannadan Varadarajan¹; Rémi Emonet²; Jean-Marc Odobez³ and Pierre Moulin⁴
Affiliations: ¹Advanced Digital Sciences Centre, Singapore; ²Jean Monnet University, France; ³Idiap Research Institute, Switzerland; ⁴Advanced Digital Sciences Centre and University of Illinois at Urbana-Champaign, Singapore
Keyword(s):
PLSA, PLSM, Activity Analysis, Topic Models, GPU, CUDA, Motifs.
Related Ontology Subjects/Areas/Topics: Computer Vision, Visualization and Computer Graphics; Motion, Tracking and Stereo Vision; Optical Flow and Motion Analyses; Video Surveillance and Event Detection
Abstract:
In this paper, we present an optimized GPU-based implementation of Probabilistic Latent Sequential Motifs (PLSM), a model proposed for sequential pattern mining from video sequences. PLSM mines recurrent sequential patterns from documents given as word-time occurrences and outputs a set of sequential activity motifs together with their starting times. PLSM's uniqueness comes from modeling both the co-occurrence and the temporal order in which words occur within a temporal window, while also handling activities that occur concurrently in the video. However, the expectation-maximization (EM) algorithm used in PLSM has a very high time complexity due to complex nested loops, requiring several dimensionality reduction steps before invoking PLSM. To truly realize the benefits of the model, we propose two GPU-based implementations of PLSM, GPU-PLSM sparse and GPU-PLSM dense. The two implementations differ in whether the entire word-count matrix (dense) or only its non-zero entries (sparse) are considered when inferring the latent motifs. Our implementation achieves impressive speed-ups of 265X and 366X for the dense and sparse approaches, respectively, on an NVIDIA GeForce GTX Titan. This speed-up enables us to remove several pre-processing and dimensionality reduction steps used to generate the input temporal documents, and thus to apply PLSM directly to the input documents. We validate our results through qualitative comparisons of the inferred motifs on two publicly available datasets. A quantitative comparison on a document-reconstruction-based abnormality measure shows that the results of GPU-PLSM and PLSA+PLSM are strongly correlated.
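To make the dense/sparse distinction concrete, the following is a minimal CUDA sketch of a sparse EM E-step in the spirit of PLSM, not the authors' actual implementation. It assumes a hypothetical layout in which the temporal document is stored as non-zero (word, absolute time, count) triplets and the model tables P(z|d), P(ts|z,d) and P(w,tr|z) are flat device arrays; one thread handles one non-zero entry and accumulates fractional counts for the M-step with atomicAdd. The dense variant would instead launch one thread per (word, time) cell of the full count matrix, zeros included.

```cuda
// Hypothetical sparse E-step kernel: one thread per non-zero (w, ta) entry.
// Z = number of motifs, Ta = document length, Tr = motif duration, W = vocabulary size.
__global__ void estep_sparse(const int*   __restrict__ w_idx,    // word index per entry
                             const int*   __restrict__ ta_idx,   // absolute time per entry
                             const float* __restrict__ count,    // word count per entry
                             int nnz, int Z, int Ta, int Tr, int W,
                             const float* __restrict__ p_z,      // P(z|d),    size Z
                             const float* __restrict__ p_ts_z,   // P(ts|z,d), size Z*Ta
                             const float* __restrict__ p_wtr_z,  // P(w,tr|z), size Z*W*Tr
                             float* acc_z, float* acc_ts_z, float* acc_wtr_z)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nnz) return;

    int   w  = w_idx[i];
    int   ta = ta_idx[i];
    float n  = count[i];

    // First pass: normalization constant over all (z, ts) explanations of (w, ta),
    // where the relative time is tr = ta - ts within the motif window.
    float norm = 0.f;
    for (int z = 0; z < Z; ++z)
        for (int tr = 0; tr < Tr; ++tr) {
            int ts = ta - tr;
            if (ts < 0 || ts >= Ta) continue;
            norm += p_z[z] * p_ts_z[z * Ta + ts] * p_wtr_z[(z * W + w) * Tr + tr];
        }
    if (norm <= 0.f) return;

    // Second pass: distribute the observed count as fractional counts (responsibilities)
    // into the M-step accumulators.
    for (int z = 0; z < Z; ++z)
        for (int tr = 0; tr < Tr; ++tr) {
            int ts = ta - tr;
            if (ts < 0 || ts >= Ta) continue;
            float r = n * p_z[z] * p_ts_z[z * Ta + ts]
                        * p_wtr_z[(z * W + w) * Tr + tr] / norm;
            atomicAdd(&acc_z[z], r);
            atomicAdd(&acc_ts_z[z * Ta + ts], r);
            atomicAdd(&acc_wtr_z[(z * W + w) * Tr + tr], r);
        }
}
```

Launched as, e.g., estep_sparse<<<(nnz + 255) / 256, 256>>>(...), the per-iteration work scales with the number of non-zero entries rather than with W x Ta, which is where the additional speed-up of the sparse approach over the dense one comes from.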