perception of the position of a sound source placed in a three-dimensional space compared to a fixed head (Wu et al., 1997). But in simulations with a keyboard as an input device, head movements are not as intuitive as they are in the real world. Providing a single open-source simulation system for the research groups working in that area removes duplicated work and helps to focus development on the key aspects of a project. Moreover, the accumulated effort of a community-driven project will lead to a far more advanced simulation than a single research team can achieve on its own. An example of such a widely
used framework in the domain of mobile robotics is
the Robot Operating System (ROS, http://www.ros.org/). It combines a communication middleware with a simulation environment and many modules implementing common algorithms as well as hardware drivers used in mobile robotics. It is an open-source project supported by a wide community. Thus the decision to develop a
framework for ETAs is justified. Section 2 outlines a
few ETAs, the GIVE-ME framework and ROS. Sec-
tion 3 lists requirements for an ETA framework, fol-
lowed by a discussion of disadvantages and advan-
tages of such a framework.
2 RELATED WORK
Dakopoulos and Bourbakis divided ETAs into three categories based on their feedback interfaces: “Audio feedback”, “Tactile feedback” and “w/o interface” (Dakopoulos and Bourbakis, 2010). Below, three projects are described and categorized. After that, the GIVE-ME framework and ROS are introduced as a basis for the intended framework. The Tyflos project assists visually impaired people in navigation tasks and environment recognition.
“In particular, the Tyflos prototype inte-
grates a wireless portable computer, cameras,
range and GPS sensors, microphones, natural
language processor, text-to-speech device, an
ear speaker, a speech synthesizer, a 2D vibra-
tion vest and a digital audio recorder.” (Bour-
bakis et al., 2008)
Tyflos is a multi-modal assistance system which interacts via different input and output channels to gather information from or present information to the user. One tactile output channel is the vibration vest: like setting a pixel on a two-dimensional monitor, the vest vibrates at a two-dimensional position. To gather information about the environment or to receive instructions from
the user, the project uses a camera or a microphone. This project is categorized as “Tactile feedback”.
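The pixel analogy of the vibration vest can be sketched as a small mapping from a detected obstacle direction to a cell of a 2D actuator grid. The grid size, field of view and coordinate convention below are assumptions made for illustration, not details taken from the Tyflos publications.

```python
# Hypothetical sketch: map an obstacle direction to a cell of a 2D
# vibration-vest grid, analogous to setting a pixel on a monitor.
# Grid layout and angle ranges are assumptions for illustration only.

VEST_ROWS, VEST_COLS = 4, 8  # assumed actuator layout

def obstacle_to_cell(azimuth_deg, elevation_deg):
    """Map an obstacle direction (degrees, relative to the user)
    to a (row, col) actuator index on the vest."""
    # Clamp to the assumed sensor field of view.
    az = max(-60.0, min(60.0, azimuth_deg))    # left/right
    el = max(-30.0, min(30.0, elevation_deg))  # down/up
    col = int((az + 60.0) / 120.0 * (VEST_COLS - 1) + 0.5)
    row = int((30.0 - el) / 60.0 * (VEST_ROWS - 1) + 0.5)
    return row, col

# An obstacle straight ahead activates a central actuator.
print(obstacle_to_cell(0.0, 0.0))  # -> (2, 4)
```

A real device would additionally encode distance, e.g. through vibration intensity or pulse rate, but that is outside this sketch.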
Gomez Valencia developed several systems to assist the visually impaired and described them in his PhD thesis (Gomez Valencia, 2014). He developed a system that recognizes street signs and reads them to the user, which assists the user with self-localization in a city. Additionally, he developed an acoustic display that presents different colors by means of different instruments, as well as an obstacle detection system and a tactile input device. Because of the acoustic display for information presentation, this project is categorized as “Audio feedback”.
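The color-to-instrument idea can be illustrated with a minimal lookup table. The concrete colors, instruments and General-MIDI program numbers below are invented for illustration and do not reproduce the actual mapping from the thesis.

```python
# Illustrative sketch of an acoustic color display: each detected
# color class is rendered with a different instrument. The concrete
# mapping is an assumption, not the thesis' actual design.

# Hypothetical color -> (instrument name, General-MIDI program) table.
COLOR_TO_INSTRUMENT = {
    "red":    ("trumpet", 56),
    "green":  ("flute", 73),
    "blue":   ("piano", 0),
    "yellow": ("violin", 40),
}

def sonify_color(color):
    """Return the instrument used to present a color, or None
    if the color class is not part of the display's vocabulary."""
    return COLOR_TO_INSTRUMENT.get(color.lower())

print(sonify_color("Green"))  # -> ('flute', 73)
```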
The Sound of Vision project (http://www.soundofvision.net/) did much research on three-dimensional sound for acoustic displays. To this end, they developed glasses as a device to capture the environment and present information to the user via sound. To make their results available to a community, they plan a reusable sonification library and a reusable training serious game as part of their “additional outputs”. This project comes close to the intended open community, but it only provides its results as a library, not as a whole framework with the possibility to exchange a device in order to test algorithms with other hardware. This project is also classified as “Audio feedback”, even though it has a few elements of “Tactile feedback” by providing a haptic vest.
Khoo describes an actual framework. The GIVE-ME framework was developed with the key goal of maintaining a simulation environment for devices, with the intention of using gamification aspects in user tests to make the tests more interesting. He describes the framework with the simulator and the workflow of keeping this framework alive (Khoo, 2016). However, the last changes to his website were made in March 2016.
ROS is a framework for distributed systems driven by a large and vital community. This active community is a big benefit in contrast to the GIVE-ME framework. ROS provides the ability for modules to communicate and interact in different ways; a module encapsulates, for example, an algorithm or a device driver, and modules can be distributed over a network. ROS also implements different tools to analyze live data or to replay recorded data. A multitude of modules for typical problems in mobile robotics, such as self-localization or navigation, is implemented and provided by the ROS community. Because of this scope and the similarities between problems in mobile robotics and ETAs for the visually impaired, ROS will be used as the basis for the intended framework.
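The module communication described above follows a publish/subscribe pattern: modules exchange messages over named topics without knowing each other. The following is a minimal plain-Python sketch of that pattern, not the actual ROS API (real ROS nodes run as separate processes, communicate over a network, and would use a client library such as rospy).

```python
# Minimal publish/subscribe sketch illustrating how ROS-style modules
# exchange messages over named topics. This is a plain-Python analogy,
# not the real ROS middleware.

from collections import defaultdict

class TopicBus:
    """In-process stand-in for a publish/subscribe middleware."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a module's callback for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every subscriber of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

# A "driver" module publishes range readings; an "algorithm" module
# consumes them. Swapping the driver leaves the subscriber untouched,
# which is exactly the exchangeability an ETA framework needs.
bus = TopicBus()
received = []
bus.subscribe("/range_sensor", received.append)
bus.publish("/range_sensor", {"distance_m": 1.2})
print(received)  # -> [{'distance_m': 1.2}]
```

Because publisher and subscriber are only coupled by the topic name and message layout, a simulated sensor and a hardware driver can publish to the same topic interchangeably.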