Authors: Anurag Singh 1; Chee-Hung Henry Chu 2 and Michael A. Pratt 1
Affiliations:
1 University of Louisiana at Lafayette, United States
2 University of Louisiana at Lafayette, United States
Keyword(s):
Video Saliency, Temporal Superpixels, Support Vector Machines, Saliency Flow.
Related Ontology Subjects/Areas/Topics: Applications; Computer Vision, Visualization and Computer Graphics; Image and Video Analysis; Image Understanding; Object Recognition; Pattern Recognition; Software Engineering; Video Analysis
Abstract:
Visual saliency of a video sequence can be computed by combining the spatial and temporal features that attract a viewer's attention to a group of pixels. We present a method that computes video saliency by integrating four such features: color dissimilarity, objectness measure, motion difference, and boundary score. We use temporal clusters of pixels, or temporal superpixels, to model the attention associated with a group of moving pixels in a video sequence. The features are combined using weights learned online by a linear support vector machine. The temporal linkage between superpixels is then used to trace the saliency flow across the image frames. We experimentally demonstrate the efficacy of the proposed method and show that it outperforms state-of-the-art methods.
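The core step in the abstract — combining per-superpixel feature scores with weights learned online by a linear SVM — can be sketched roughly as below. The synthetic feature vectors, labels, and Pegasos-style subgradient update are illustrative assumptions, not the authors' implementation; the four feature columns stand in for color dissimilarity, objectness, motion difference, and boundary score.

```python
import numpy as np

# Hypothetical training data: one row per temporal superpixel, with four
# feature scores (color dissimilarity, objectness, motion difference,
# boundary score). Labels mark superpixels as salient (+1) or not (-1).
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = np.where(X @ np.array([0.4, 0.3, 0.2, 0.1]) > 0.5, 1.0, -1.0)

# Append a constant column so the linear model carries a bias term.
Xb = np.hstack([X, np.ones((len(X), 1))])

def online_linear_svm(X, y, lam=0.01, epochs=10):
    """Pegasos-style online subgradient training of a linear SVM.

    Processes one example at a time, which mimics the online weight
    learning described in the abstract (an assumed stand-in update rule).
    """
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            if yi * (w @ xi) < 1.0:  # margin violated: hinge subgradient
                w = (1 - eta * lam) * w + eta * yi * xi
            else:  # only shrink toward the regularizer
                w = (1 - eta * lam) * w
    return w

w = online_linear_svm(Xb, y)

# A frame's saliency map is then the weighted combination of the feature
# maps: here, one score per superpixel.
saliency = Xb @ w
```

In this sketch the learned weight vector plays the role of the feature-combination weights; propagating the resulting per-superpixel scores along temporal superpixel links (the saliency flow) is not shown.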