While visually impaired people are experts at using the tactile feedback from the cane to understand their surroundings at ground level, a white cane, by its nature, cannot give any warning about obstacles at chest or head level. Whether they are low branches in natural settings, or signs, cordons, or temporary fences in an urban environment, such obstacles are extremely common, and an unexpected collision with them has the potential to cause serious injury. This represents an unfortunate constraint on personal independence, since it means proceeding with great caution, and possibly relying on a guide dog, a sighted friend or assistant, or hats and similar headwear that would otherwise be unwanted.
The addition to the cane of an upward-pointing range sensor can help with this problem. As the user sweeps the cane, receiving tactile feedback from the ground, they also receive feedback from the sensor about the distance to the nearest obstacle at a higher level.
To advance the state of the art we should therefore ask: how can the next generation of this technology improve on what currently exists? There are many possibilities. A crucial one is that today's devices typically rely on a single sensor, based on, say, sonar, radar, or a laser. Each type of sensor has its own strengths and weaknesses. A given sensor may perform poorly for surfaces with certain reflectivities, curvatures, or other properties, or in conditions of varying ambient light and humidity, whereas other sensors are likely to have complementary capabilities. Similarly, some sensors are well suited to resolving small or distant objects but have a narrow field of view, whereas others resolve detail less well but cover a broad area.
Another important potential improvement concerns the cognitive load placed on the user. When the feedback presented to the user consists of relatively direct range readings from the sensor, the user's brain must do the work of interpreting this information and reconstructing the three-dimensional environment. This may involve getting used to the quirks and idiosyncrasies of the sensor, a problem that would be compounded in a system with multiple sensors. The user must then combine this with their model of the ground-level situation, their plans and route, and their knowledge of contextual factors such as the location of pedestrian crossings and amenities.
Combining the inputs of multiple sensors, and processing them so that the most useful information is made salient to the user, could greatly improve the value of cane-mounted sensors. These are the prospects identified by the INSPEX project, and realizing them is its aim; doing so entails meeting significant technical challenges.
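As a greatly simplified illustration of the kind of benefit such combination can bring, the sketch below fuses a precise but narrow-beam range reading with a coarse but wide-beam one into a single estimate by inverse-variance weighting. The sensor characteristics, variances, and weighting scheme are assumptions made purely for illustration; this is not the fusion method used in INSPEX.

/* Toy illustration (not the INSPEX algorithm): fusing two range
 * readings with complementary characteristics into one estimate.
 * The sensor types, variances, and inverse-variance weighting are
 * assumptions made purely for illustration. */
#include <stdio.h>

typedef struct {
    double range_m;   /* measured distance to nearest obstacle */
    double variance;  /* confidence: smaller = more trustworthy */
} range_reading;

/* Inverse-variance weighted average of two readings of the same obstacle. */
static double fuse_ranges(range_reading a, range_reading b)
{
    double wa = 1.0 / a.variance;
    double wb = 1.0 / b.variance;
    return (wa * a.range_m + wb * b.range_m) / (wa + wb);
}

int main(void)
{
    /* Narrow-beam time-of-flight sensor: precise, but easily misses objects. */
    range_reading laser = { 1.52, 0.01 };
    /* Wide-beam ultrasonic sensor: coarse, but covers a broad area. */
    range_reading sonar = { 1.70, 0.09 };

    printf("fused obstacle distance: %.2f m\n", fuse_ranges(laser, sonar));
    return 0;
}

The fused estimate leans towards the more trustworthy reading while still being informed by the broader one, which is the intuition behind combining complementary sensors.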
In INSPEX, the feedback given to the visually impaired user is presented via 3D immersive sound, transmitted to the user through binaural headphones. To be convincing, the sound picture must be stable with respect to a 3D inertial frame. Therefore, in addition to addressing the issues of cognitive load discussed above, the INSPEX system must be aware of the user's head movements in order to achieve the required spatial stability of the sound image. This adds another technical challenge.
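To make the head-tracking requirement concrete, the following minimal sketch shows the idea for a single rotational axis: the obstacle's direction is fixed in the world frame, so the direction rendered to the headphones must be corrected by the current head yaw. The yaw-only model and the function names are illustrative assumptions; a complete system would track orientation about all three axes and drive a full immersive audio renderer.

/* Minimal sketch of why head tracking is needed for a stable sound
 * image: the obstacle's bearing is fixed in the world frame, so the
 * bearing rendered to the headphones must subtract the head's yaw.
 * Single-axis model and names are assumptions for illustration only. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Wrap an angle to (-pi, pi]. */
static double wrap_pi(double a)
{
    while (a <= -M_PI) a += 2.0 * M_PI;
    while (a >   M_PI) a -= 2.0 * M_PI;
    return a;
}

/* Bearing of the obstacle relative to the listener's head, given the
 * obstacle bearing in the world frame and the current head yaw. */
static double head_relative_bearing(double obstacle_world_rad,
                                    double head_yaw_rad)
{
    return wrap_pi(obstacle_world_rad - head_yaw_rad);
}

int main(void)
{
    double obstacle = M_PI / 4.0;  /* obstacle bearing fixed in the world frame */

    /* As the head turns, the rendered bearing changes so that the
     * perceived source stays fixed in the world frame. */
    for (double yaw = 0.0; yaw <= M_PI / 2.0; yaw += M_PI / 8.0)
        printf("head yaw %5.1f deg -> render source at %5.1f deg\n",
               yaw * 180.0 / M_PI,
               head_relative_bearing(obstacle, yaw) * 180.0 / M_PI);
    return 0;
}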
1.2 The INSPEX Project
The INSPEX project (INSPEX Homepage, 2017) is an international collaboration with the goal of developing a small, lightweight system which combines the inputs of multiple sensors and integrates their readings into a three-dimensional model of the obstacles in its surroundings. Such a system would have multiple potential applications, including autonomous drones and fire-fighters working in low-visibility conditions. The main focus, however, is on the use case of assistive technology which could be attached to a white cane, as described above.
Achieving this ambition means facing several fundamental obstacles. To be a useful tool for a visually impaired person, INSPEX must provide reliable, high-performance functionality over the course of many hours. If the user is required to stop several times per day to recharge batteries, then their independence and quality of life will not have been significantly improved. At the same time, the system must be held in the hand and moved continuously by the muscles of the wrist for long stretches of time. This implies that the system must be lightweight; otherwise, rather than enhancing the user's day-to-day life, it could cause injury. These requirements translate into a number of technical challenges.
First, the sensors themselves need to be significantly miniaturized, and their weight and power consumption must be reduced. Concomitantly, the computational power needed for processing must be tightly controlled, so that lighter, more efficient processors can be used. For this reason, the standard Occupancy Grid algorithm used for obstacle detection in the automotive domain has had to be significantly optimized for architectures with fewer facilities than its usual implementations (Dia et al., 2017).
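For reference, the classical Occupancy Grid approach maintains a per-cell estimate of the probability of occupancy, typically stored in log-odds form and adjusted by an inverse sensor model as each range reading arrives. The sketch below shows this textbook update for a single one-dimensional ray; the grid size, cell resolution, and sensor-model constants are illustrative assumptions, and the code does not reflect the optimizations of the INSPEX implementation reported in (Dia et al., 2017).

/* Textbook log-odds occupancy grid update for a single 1D ray,
 * included only to illustrate the kind of computation that must fit
 * on a constrained processor; it is not the optimised INSPEX
 * implementation. Grid size, cell resolution and the inverse sensor
 * model constants are assumptions. */
#include <stdio.h>

#define GRID_CELLS  64
#define CELL_SIZE_M 0.1f

/* Log-odds increments of the (assumed) inverse sensor model. */
#define LO_OCC   0.85f  /* cell containing the detected obstacle */
#define LO_FREE -0.40f  /* cells the ray passed through */

static float grid[GRID_CELLS];  /* 0.0 = unknown (probability 0.5) */

/* Update the cells along one ray given a measured range. */
static void update_ray(float range_m)
{
    int hit = (int)(range_m / CELL_SIZE_M);
    if (hit >= GRID_CELLS) hit = GRID_CELLS - 1;

    for (int i = 0; i < hit; i++)
        grid[i] += LO_FREE;   /* space before the echo is likely free */
    grid[hit] += LO_OCC;      /* cell at the echo is likely occupied */
}

int main(void)
{
    /* Two sweeps reporting an obstacle at roughly 2.0 m. */
    update_ray(2.05f);
    update_ray(1.95f);

    for (int i = 18; i <= 21; i++)
        printf("cell %d (%.1f m): log-odds %+.2f\n",
               i, i * CELL_SIZE_M, grid[i]);
    return 0;
}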
This constraint also implies that clever optimizations will be required throughout the basic utility systems which underpin the advanced data processing. These will have to function with minimal memory, and will have to share hardware resources with other parts of the system.