Authors:
Nyan Bo Bo; Peter Veelaert; Wilfried Philips
Affiliation:
imec-IPI-UGent, Belgium
Keyword(s):
Multi-camera Tracking, Data Fusion, Occlusion Handling, Uncertainty Assessment, Decentralized Computing.
Related Ontology Subjects/Areas/Topics:
Applications and Services; Camera Networks and Vision; Computer Vision, Visualization and Computer Graphics; Motion, Tracking and Stereo Vision; Tracking and Visual Navigation; Video Surveillance and Event Detection
Abstract:
In single-view visual target tracking, occlusion is one of the most challenging problems, since a target's features are partially or fully covered by other targets when occlusion occurs. Instead of relying on a limited single view, a target can be observed from multiple viewpoints using a network of cameras to mitigate the occlusion problem. However, the information coming from different views must be fused by relying less on views with heavy occlusion and more on views with little or no occlusion. To address this need, we propose a new fusion method which fuses the positions of a person locally estimated by smart cameras observing from different viewpoints, while taking into account the occlusion in each view. The genericity and scalability of the proposed fusion method are high, since it needs only the position estimates from the smart cameras. The uncertainty of each local estimate is computed in a fusion center from a simulated occlusion assessment based on the camera's projective geometry. These uncertainties, together with the local estimates, are used to model the probabilistic distributions required for the Bayesian fusion of the local estimates. The performance evaluation on three challenging video sequences shows that our method achieves higher accuracy than the local estimates as well as the tracking results of a classical triangulation method. Our method also outperforms two state-of-the-art trackers on a publicly available multi-camera video sequence.
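To illustrate the kind of uncertainty-aware fusion described in the abstract, the sketch below fuses per-camera position estimates as independent Gaussians via a product of Gaussians (inverse-variance weighting), so that views with large occlusion-induced uncertainty contribute less. This is a minimal, generic instance of Bayesian fusion of Gaussian estimates, not the authors' exact algorithm; the per-view variances standing in for the occlusion assessment are hypothetical.

```python
import numpy as np

def fuse_gaussian_estimates(means, variances):
    """Fuse independent 2D Gaussian position estimates.

    means:     (n_views, 2) ground-plane positions from each smart camera
    variances: (n_views,) isotropic variance per view; larger variance
               models heavier occlusion, so that view is down-weighted.
    Returns the fused position and its (scalar) fused variance.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances              # inverse-variance weights
    fused_var = 1.0 / precisions.sum()        # fused precision = sum of precisions
    fused_mean = fused_var * (precisions[:, None] * means).sum(axis=0)
    return fused_mean, fused_var

# Example: two clear views agree, one heavily occluded view (large
# variance) is an outlier and barely influences the fused estimate.
means = [[2.0, 3.0], [2.2, 2.8], [5.0, 7.0]]
variances = [0.1, 0.1, 10.0]
pos, var = fuse_gaussian_estimates(means, variances)
# pos stays near the two reliable views; var is below each input variance.
```

Because precisions add, the fused estimate is always at least as certain as the best single view, which matches the abstract's claim that fusion improves on the local estimates.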