Authors:
María José Gómez-Silva, José María Armingol and Arturo de la Escalera
Affiliation:
Universidad Carlos III de Madrid, Spain
Keyword(s):
Deep Learning, Convolutional Neural Network, Mahalanobis Distance, Person Re-Identification.
Related Ontology Subjects/Areas/Topics:
Computer Vision, Visualization and Computer Graphics; Motion, Tracking and Stereo Vision; Tracking and Visual Navigation; Video Surveillance and Event Detection
Abstract:
Measuring appearance similarity in Person Re-Identification is a challenging task that requires not only the selection of discriminative visual descriptors but also their optimal combination. This paper presents a unified learning framework composed of Deep Convolutional Neural Networks that simultaneously and automatically learns the most salient features for each of nine different body parts, together with the best weighting of those features to form a person descriptor. Moreover, to cope with cross-view variations, these variations are encoded in a Mahalanobis matrix through an adaptive process, also integrated into the learning framework, which exploits the discriminative information given by the dataset labels to analyse the data structure. The effectiveness of the proposed approach, named Deep Parts Similarity Learning (DPSL), has been evaluated and compared with other state-of-the-art approaches on the challenging PRID2011 dataset.
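To make the role of the Mahalanobis matrix concrete, the sketch below shows how a learned positive semi-definite matrix M defines a similarity score between two person descriptors. This is a minimal, hypothetical illustration of the general metric d(x, y)² = (x − y)ᵀ M (x − y); the descriptor dimensionality, the factorisation M = LᵀL, and all variable names are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance between descriptors x and y
    under a learned positive semi-definite matrix M."""
    d = x - y
    return float(d @ M @ d)

rng = np.random.default_rng(0)
dim = 4                       # toy descriptor dimensionality (illustrative)
L = rng.standard_normal((dim, dim))
M = L.T @ L                   # positive semi-definite by construction

x = rng.standard_normal(dim)  # stand-ins for two learned person descriptors
y = rng.standard_normal(dim)

print(mahalanobis_sq(x, y, M))  # non-negative dissimilarity score
print(mahalanobis_sq(x, x, M))  # distance of a descriptor to itself: 0.0
```

In metric-learning methods of this kind, L (equivalently M) is the object optimised from labelled same-identity / different-identity pairs, so that descriptors of the same person across camera views end up close under this distance.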