ations in parallel, then the number of obstacle avoidance approaches at hand is limited. In (Michels et al., 2005) an obstacle avoidance technique using a monocular vision camera together with a laser range finder is presented. The algorithm was tested in a highly unstructured outdoor environment, but the test system was not strictly an embedded system: all the image processing was done on a development platform, so the technique might not meet real-time constraints on embedded hardware. Another common approach, addressed in (Chao
et al., 1999) (Borenstein and Koren, 1985) (Boren-
stein and Koren, 1988), is based on edge detection.
In this method, the algorithm determines the vertical edges of the obstacle and steers the robot around these edges without colliding with the obstacle. In
(Pratt, 2007), the Lucas-Kanade optical flow based al-
gorithm is used for MAVs (Micro Aerial Vehicles)
in urban environments. Similarly, in (Souhila and Karim, 2007) Horn and Schunck's optical flow based
algorithm is used on autonomous robots. Using optical flow, image velocity vectors are determined, which can be split into translational and rotational components. From the translational component, the time to contact with the obstacle can be calculated, which helps in taking the necessary avoidance actions; a minimal sketch of this computation is given below.
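As a hedged illustration (a minimal sketch of the general idea, not code from the cited papers), the C fragment below assumes the flow at an image point has already been computed, that the camera motion is purely translational, and that the focus of expansion (FOE) is known; under these assumptions the time to contact is the point's distance from the FOE divided by its radial flow speed.

    #include <math.h>

    /* Sketch of a time-to-contact (TTC) estimate from one optical flow
     * vector, assuming pure translation and a known focus of expansion.
     * TTC = distance of the point from the FOE / radial flow speed.    */
    double time_to_contact(double px, double py,        /* image point        */
                           double foe_x, double foe_y,  /* focus of expansion */
                           double vx, double vy,        /* flow, pixels/frame */
                           double frame_period)         /* seconds per frame  */
    {
        double rx = px - foe_x, ry = py - foe_y;
        double r = sqrt(rx * rx + ry * ry);                /* pixels from FOE */
        double v = sqrt(vx * vx + vy * vy) / frame_period; /* pixels/second   */
        return (v > 0.0) ? r / v : INFINITY;               /* seconds         */
    }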
Another area of research in computer vision and robotics targeted in this study is the development of an efficient visual odometry algorithm for small-size robots. In
(Maimone et al., 2007), a feature tracking based mo-
tion estimation approach is presented to obtain visual
odometry information using stereo images captured
from NASA’s Mars Exploration Rovers (MERs) in a
highly unstructured environment. In (Campbell et al.,
2005), visual odometry results using optical flow in-
formation are presented for a ground robot moving over varying terrain, including indoor and outdoor environments. The reported errors were 3.3% on carpet (high friction) and 7.1% on polished concrete. Similarly,
in (Milford and Wyeth, 2008) (Kyprou, 2009) a simple scanline-intensity-based algorithm for visual odometry is presented; a minimal sketch of the idea is given below. The odometry error of this algorithm can be large but, in spite of this, notable results are achieved when the odometry information is used within a SLAM (Simultaneous Localization and Mapping) system.
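As a hedged illustration of the scanline idea (a generic sketch, not the exact algorithm of the cited papers), the C fragment below collapses each frame into a one-dimensional intensity profile by summing image columns and then searches for the horizontal shift that minimises the mean absolute difference between consecutive profiles; with a fixed forward-facing camera this shift approximates the inter-frame rotation.

    #include <math.h>

    /* Sketch of scanline intensity profile matching.  prev and curr are
     * column-summed intensity profiles of width w from two consecutive
     * frames.  The shift (in pixels) minimising the mean absolute
     * difference over the overlap approximates the rotation between them. */
    int best_profile_shift(const float *prev, const float *curr,
                           int w, int max_shift)
    {
        int best_s = 0;
        float best_err = INFINITY;
        for (int s = -max_shift; s <= max_shift; s++) {
            float err = 0.0f;
            int n = 0;
            for (int x = 0; x < w; x++) {
                int xs = x + s;
                if (xs < 0 || xs >= w)
                    continue;                  /* outside the overlap */
                err += fabsf(prev[x] - curr[xs]);
                n++;
            }
            if (n > 0 && err / n < best_err) {
                best_err = err / n;
                best_s = s;
            }
        }
        return best_s;
    }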
Related research in (Schaerer, 2006) addresses line feature tracking: using the Hough transform, lines are tracked to obtain distance and orientation information, as sketched below.
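For reference, here is a minimal sketch of the standard Hough transform for lines (an illustrative implementation of the general technique, not code from (Schaerer, 2006)): every edge pixel votes for the (theta, rho) parameter pairs of the lines it could lie on, and peaks in the accumulator yield each line's orientation and distance from the image origin.

    #include <math.h>
    #include <string.h>

    #define N_THETA 180   /* angular bins, 1 degree resolution */
    #define N_RHO   512   /* distance bins                     */

    /* Sketch of the standard Hough transform for lines: each edge pixel
     * votes for all (theta, rho) pairs satisfying
     * rho = x*cos(theta) + y*sin(theta).  Accumulator peaks give the
     * orientation (theta) and distance from the origin (rho) of lines. */
    void hough_lines(const unsigned char *edges, int w, int h,
                     unsigned short acc[N_THETA][N_RHO])
    {
        const double pi = 3.14159265358979323846;
        double rho_max = sqrt((double)w * w + (double)h * h);
        memset(acc, 0, N_THETA * N_RHO * sizeof(unsigned short));
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (!edges[y * w + x])
                    continue;                  /* skip non-edge pixels */
                for (int t = 0; t < N_THETA; t++) {
                    double th = t * pi / N_THETA;
                    double rho = x * cos(th) + y * sin(th);
                    /* map rho in [-rho_max, rho_max] onto [0, N_RHO-1] */
                    int r = (int)((rho + rho_max) * (N_RHO - 1) / (2.0 * rho_max));
                    acc[t][r]++;
                }
            }
    }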
Another algorithm presented in (Younse and Burks,
2007) is based on feature tracking using the Lucas-
Kanade algorithm. It also utilizes the information ob-
tained from camera modelling (intrinsic and extrin-
sic parameters) to precisely locate the new position
of the vehicle. The average translation error reported for a 30 cm movement was 4.8 cm, and the average rotation errors were 1 and 8 degrees for 45 and 180 degree rotations, respectively. This approach performed poorly at high rotation rates, as features could move out of the search window. Hence, in the field
of autonomous robotics, many approaches to visual
odometry (Maimone et al., 2007) (Nistér et al., 2006)
(Howard, 2008) using high speed systems are ad-
dressed and notable results are achieved. The high
computational cost of these algorithms makes them
unsuitable for swarms of small-size robots. However, further research into fast and reactive solutions to these problems is still required.
The methods used to perform vision-based obstacle avoidance and visual odometry are detailed in section 2. In section 3, the results obtained from experiments performed in an indoor environment are presented. Conclusions are drawn in section 4.
2 METHODOLOGY
The hardware is an important factor that strongly influences the methods adopted to solve the problem
at hand. The onboard processing on the robot (shown
in Figure 1) was achieved using a high-performance 16/32-bit Blackfin BF537E processor. uClinux (microcontroller Linux), a powerful operating system customized for embedded systems, was used
as the onboard operating system. Code compilation was done using GNU cross compilers on a Linux-based development platform; a representative invocation is sketched below.
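As an illustration only (the toolchain prefix and flags below are assumptions based on the publicly available Blackfin uClinux toolchain, not details taken from this work), compiling one module of the vision library might look like:

    bfin-uclinux-gcc -O2 -mcpu=bf537 -c sobel.c -o sobel.o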
For the testing and demonstration of the developed vision algorithms, the SRV robot by Surveyor Corporation was used. Before
proceeding to the complex vision algorithms such as
obstacle avoidance and visual odometry, a library of
basic vision algorithms was developed. This library
was optimised especially for the Blackfin processor
architecture. It includes image conversion between different formats (such as YUV to colour), colour to greyscale conversion, image gradients using the Sobel and Canny operators, region-growing-based image segmentation, colour blob detection, feature detection using the Harris algorithm, a cross-correlation-based algorithm to solve the feature correspondence problem, and image erosion and dilation, with further algorithms to be added as the need arises. A minimal sketch of one such primitive is given below.
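As an example of the style of primitive in this library, the following is a minimal C sketch of a Sobel gradient magnitude computation on a greyscale image (an illustrative re-implementation, not the library's actual code):

    #include <stdlib.h>
    #include <string.h>

    /* Sketch of a Sobel gradient magnitude primitive.  src is a w x h
     * 8-bit greyscale image; dst receives |gx| + |gy| clamped to 255.
     * Border pixels are set to zero for simplicity.                    */
    void sobel_magnitude(const unsigned char *src, unsigned char *dst,
                         int w, int h)
    {
        memset(dst, 0, (size_t)w * h);
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                const unsigned char *p = src + y * w + x;
                int gx = -p[-w-1] - 2*p[-1] - p[w-1]      /* horizontal */
                       +  p[-w+1] + 2*p[ 1] + p[w+1];     /* gradient   */
                int gy = -p[-w-1] - 2*p[-w] - p[-w+1]     /* vertical   */
                       +  p[ w-1] + 2*p[ w] + p[w+1];     /* gradient   */
                int mag = abs(gx) + abs(gy);
                dst[y * w + x] = (unsigned char)(mag > 255 ? 255 : mag);
            }
    }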
In the following sections, the approaches used to perform vision-based obstacle avoidance and the range of visual odometry algorithms developed are discussed in detail.