Authors:
Osian Haines; David Bull and J. F. Burn
Affiliation:
University of Bristol, United Kingdom
Keyword(s):
Vision Guided Locomotion, Segmentation, Image Interpretation, Scene Understanding, Inertial Sensors, Oculus Rift, Mobile Robotics.
Related Ontology Subjects/Areas/Topics:
Applications; Color and Texture Analyses; Computer Vision, Visualization and Computer Graphics; Image and Video Analysis; Pattern Recognition; Robotics; Segmentation and Grouping; Software Engineering
Abstract:
In the context of semantic image segmentation, we show that knowledge of world-centric camera orientation (from an inertial sensor) can be used to improve classification accuracy. This works because certain structural classes (such as the ground) tend to appear in certain positions relative to the viewer. We show that orientation information is useful in conjunction with typical image-based features, and that fusing the two gives substantially better classification accuracy than either alone: adding orientation information increased accuracy from 61% to 71% over the six classes in our test set. The method is applied to segmentation using both points and lines, and we also show that combining points with lines further improves accuracy. This work is a step towards our intended goal of visually guided locomotion for either an autonomous robot or a human.
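The abstract describes fusing image-based features with orientation cues before classification. The following is a minimal sketch of that general idea, not the authors' implementation: it uses synthetic placeholder data, an assumed random-forest classifier, and illustrative feature names (image_feats, orientation_feats) to show how adding an orientation-derived feature can be compared against image features alone.

```python
# Hypothetical sketch of feature fusion for per-segment classification.
# Data, feature names, and the classifier choice are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_classes = 1200, 6

# Placeholder class labels for six structural classes.
labels = rng.integers(0, n_classes, size=n_samples)

# Placeholder image-based features (e.g. colour/texture descriptors per segment),
# made weakly informative so the example behaves plausibly.
image_feats = rng.normal(size=(n_samples, 16))
image_feats[:, 0] += 0.3 * labels

# Placeholder orientation-derived features, e.g. a segment's elevation angle
# relative to gravity, computed from the inertial sensor's camera orientation.
orientation_feats = 0.5 * labels[:, None] + rng.normal(size=(n_samples, 2))

def evaluate(features, labels):
    """Train a simple classifier and return held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, random_state=0
    )
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

print("image features only :", evaluate(image_feats, labels))
print("image + orientation :", evaluate(np.hstack([image_feats, orientation_feats]), labels))
```

On synthetic data like this, concatenating the orientation features typically raises held-out accuracy, mirroring the kind of improvement the abstract reports when orientation information is added to image-based features.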