Authors:
Guillaume Tatur 1; Isabelle Marc 2; Gerard Dupeyron 1 and Michel Dumas 3
Affiliations:
1 Centre Hospitalier Universitaire de Nîmes and Institut ARAMAV, France; 2 LGI2P and Institut d’Electronique du Sud, France; 3 Institut d’Electronique du Sud and Institut ARAMAV, France
Keyword(s): Prosthetic Vision, Mobility, Depth-based Representation.
Related Ontology Subjects/Areas/Topics:
Applications and Services; Biomedical Engineering; Biomedical Signal Processing; Computer Vision, Visualization and Computer Graphics; Cybernetics and User Interface Technologies; Devices; Human-Computer Interaction; Information and Systems Security; Medical Image Detection, Acquisition, Analysis and Processing; Physiological Computing Systems
Abstract:
Recent advances in visual prostheses raise hope for improving the performance of late-blind people in daily-life tasks. Autonomy in mobility is a major factor in quality of life, and ongoing research aims to develop new image processing for environment representation and to evaluate mobility performance. We present a novel approach for generating a scene representation devoted to mobility tasks, which may complement current prosthetic vision research. In this work, carried out in collaboration with low-vision rehabilitation specialists, depth cues as well as contrast perception are made accessible through a composite representation. After presenting the advantages and drawbacks of scene representations based solely on captured depth or luminosity information, we introduce our method, which combines both types of information in a single representation based on a temporal scanning of depth layers.
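The abstract's key idea, a temporal scanning of depth layers that carries luminosity (contrast) information, can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the authors' implementation: it assumes the scene is captured as a registered luminosity image and depth map, slices the depth range into a small number of layers, and emits one frame per time step in which only the pixels belonging to the active layer show their luminosity.

```python
import numpy as np

def depth_layer_frames(luminosity, depth, n_layers=4):
    """Illustrative sketch (hypothetical, not the paper's method):
    cycle through depth layers over time, displaying luminosity only
    for pixels whose depth falls in the currently active layer.

    luminosity: 2-D array of intensities in [0, 1]
    depth:      2-D array of distances, same shape as luminosity
    n_layers:   number of depth slices scanned per cycle
    """
    d_min, d_max = depth.min(), depth.max()
    # Bin every pixel into a depth layer (0 = nearest layer).
    edges = np.linspace(d_min, d_max, n_layers + 1)
    layer_of = np.clip(np.digitize(depth, edges) - 1, 0, n_layers - 1)
    frames = []
    for t in range(n_layers):  # one frame per scanning step
        # Keep luminosity only where the pixel's layer matches step t.
        frames.append(np.where(layer_of == t, luminosity, 0.0))
    return frames

# Toy scene: top row is near (1 m), bottom row is far (4 m).
lum = np.array([[0.9, 0.2],
                [0.5, 0.7]])
dep = np.array([[1.0, 1.0],
                [4.0, 4.0]])
frames = depth_layer_frames(lum, dep, n_layers=2)
# frames[0] shows only the near pixels, frames[1] only the far ones.
```

Cycling through the frames over time conveys depth ordering (which layer appears when) while preserving within-layer contrast, which is the composite-representation property the abstract describes.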