Authors:
Sion Hannuna (1); Xianghua Xie (2); Majid Mirmehdi (1) and Neill Campbell (1)

Affiliations:
(1) Department of Computer Science, University of Bristol, United Kingdom
(2) Department of Computer Science, University of Wales Swansea, United Kingdom
Keyword(s):
Uncategorised object detection, Stereo depth, Assisted blind navigation, Sparse optical flow.
Related Ontology Subjects/Areas/Topics:
Applications; Computer Vision, Visualization and Computer Graphics; Human-Computer Interaction; Image and Video Analysis; Methodologies and Methods; Motion and Tracking; Motion, Tracking and Stereo Vision; Pattern Recognition; Physiological Computing Systems; Software Engineering; Video Analysis
Abstract:
We propose a robust approach to annotating independently moving objects captured by head-mounted stereo cameras worn by an ambulatory, visually impaired user. Initially, sparse optical flow is extracted from a single image stream, in tandem with dense depth maps. Then, under the assumption that the apparent motion generated by camera egomotion is dominant, flow corresponding to independently moving objects (IMOs) is robustly segmented using MLESAC. Next, the mode depth of the feature points defining this flow (the foreground) is obtained by aligning them with the depth maps. Finally, a bounding box is scaled in proportion to this mode depth and robustly fitted to the foreground points so as to maximise the number of inliers.
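The pipeline in the abstract can be illustrated with a minimal sketch on synthetic data. This is not the authors' implementation: it substitutes a simple RANSAC-style consensus fit of a dominant translation for MLESAC over a full egomotion model, and uses fabricated point tracks and a toy depth map purely to show the flow-segmentation, mode-depth, and bounding-box steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sparse flow: 80 background points displaced by camera
# egomotion (approximated as a pure translation), plus 20 points on
# an independently moving object (IMO) with a different displacement.
bg_pts = rng.uniform(0, 100, (80, 2))
bg_new = bg_pts + np.array([2.0, 0.5]) + rng.normal(0, 0.05, (80, 2))
fg_pts = rng.uniform(40, 60, (20, 2))
fg_new = fg_pts + np.array([-3.0, 4.0])
pts = np.vstack([bg_pts, fg_pts])
new = np.vstack([bg_new, fg_new])

def dominant_motion_inliers(pts, new, iters=100, thresh=0.5):
    """RANSAC-style stand-in for MLESAC: repeatedly hypothesise the
    dominant displacement from one sample and keep the largest
    consensus set; points outside it are IMO candidates."""
    disp = new - pts
    best = None
    for _ in range(iters):
        i = rng.integers(len(disp))
        residuals = np.linalg.norm(disp - disp[i], axis=1)
        inliers = residuals < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return best

fg_mask = ~dominant_motion_inliers(pts, new)  # flow not explained by egomotion

# Mode depth of the foreground points, read off a (toy) dense depth
# map and quantised into unit bins before taking the mode.
depth_map = np.full((100, 100), 10.0)
depth_map[40:60, 40:60] = 3.0  # the IMO sits nearer the camera
fg_int = pts[fg_mask].astype(int).clip(0, 99)
depths = depth_map[fg_int[:, 1], fg_int[:, 0]]
vals, counts = np.unique(np.round(depths), return_counts=True)
mode_depth = vals[np.argmax(counts)]

# Bounding box over the foreground points; in the paper its size is
# scaled in proportion to the mode depth before the robust fit.
x0, y0 = pts[fg_mask].min(axis=0)
x1, y1 = pts[fg_mask].max(axis=0)
```

With the synthetic displacements well separated, all 20 IMO points fall outside the egomotion consensus set and the mode depth recovers the near-plane value of the toy depth map.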