A composite scene representation based on depth and
luminosity information is presented. This
representation should allow for safer mobility while
preserving luminosity contrast perception, which is
useful for orientation. Orientation and mobility tests
with normally sighted subjects wearing head-mounted
displays simulating prosthetic vision are underway
to evaluate this method and to determine effective
values for all parameters, notably the scanning
velocity. These tests should validate the expected
advantages of the composite representation. A first
observation is that it provides a means to assess the
presence and position of surrounding obstacles,
independently of their appearance and lighting
conditions. Scanning as presented here can help
remove possible ambiguities between obstacles that
are in close proximity to each other. Moreover, this
method can provide a solution to the classic dilemma
between field of view and acuity: with the scanning
method, transmitting the entire camera field of view
should be possible because thin objects can still be
detected.
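One way the scanning idea could be realised is a virtual plane that sweeps the depth range at the chosen scanning velocity, lighting only the pixels it intersects. The sketch below is an illustrative assumption, not the authors' implementation: the function name `scan_frame`, the wrap-around sweep, and the depth band parameter are all hypothetical, but it shows why a thin object is still detected once the plane passes through its depth.

```python
import numpy as np

def scan_frame(depth_map, luminosity, t, scan_velocity=1.0,
               d_min=0.5, d_max=5.0, band=0.25):
    """Hypothetical depth-scanning renderer (illustrative only).

    A virtual scan plane sweeps from d_min to d_max metres at
    `scan_velocity` (m/s), wrapping around; pixels whose depth lies
    within `band` metres of the plane are lit, modulated by their
    luminosity so that luminosity contrast is preserved.
    """
    span = d_max - d_min
    d = d_min + (scan_velocity * t) % span   # current scan-plane depth
    mask = np.abs(depth_map - d) <= band     # pixels the plane intersects
    return np.where(mask, luminosity, 0.0)   # lit pixels keep luminosity
```

Because every pixel of the camera field of view is tested against the plane at some instant of the sweep, even a one-pixel-wide obstacle produces a visible response when the plane reaches its depth, without any down-sampling of the field of view.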
Finally, in our view, the major advantage of this
technique is that it lets the subject choose the
scanning parameters according to his or her current
actions and expectations. Thus, visual exploration
tasks such as landmark detection and mental map
building could be facilitated. Optimal use of this new
kind of representation, particularly the distinction
between depth and luminosity information, depends
on complete mental assimilation of the technique
through dedicated training sessions. Consequently,
drawing on low vision rehabilitation concepts, one of
our future aims is to develop appropriate learning
strategies.
ACKNOWLEDGEMENTS
This research was supported by the French
Federation of the Blind and Visually Impaired
(FAF).
BIOSIGNALS 2014 - International Conference on Bio-inspired Systems and Signal Processing