of 2.5°. The vector is then output to the controller.
Since the control algorithm does not build a model,
there is no need to convert the pixel y-coordinates
to an absolute measurement, e.g. cm or inches. Because
the resolution of the images is very low, the distance
estimation is not very accurate. At the lowest row of
the image, where the ratio between a pixel and its pro-
jected real-world area is highest, each pixel represents
an area of 2 × 1.5 cm².
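As an illustration, the per-column obstacle vector can be computed directly in pixel coordinates from a boolean ground mask, as in the following Python sketch. The function name, mask layout, and bottom-up scan are illustrative, not a description of our exact implementation:

import numpy as np

def obstacle_vector(ground_mask: np.ndarray) -> np.ndarray:
    """For each image column, return the row index of the first
    non-ground pixel found when scanning upward from the bottom
    of the image. Rows are indexed top-to-bottom, so a larger row
    index means the obstacle is closer to the robot; -1 means the
    whole column is free ground."""
    rows, cols = ground_mask.shape
    vector = np.full(cols, -1, dtype=int)
    for c in range(cols):
        y = rows - 1
        while y >= 0 and ground_mask[y, c]:
            y -= 1
        vector[c] = y  # -1 if every pixel in the column is ground
    return vector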
The distance that triggers the robot to turn is set
to 30 cm. The robot needs to turn fast enough that
an obstacle will not come closer than 15 cm in front of it,
since the distance of any object in this area cannot be
calculated correctly. At maximum speed, the robot
will have about two seconds to react, and if the robot
has already slowed down while approaching the ob-
ject, it will have about three seconds. We tried
different combinations of trigger distances and turn-
ing speeds to achieve a desirable combination. The
first criterion is that the robot must travel safely; this
criterion sets the minimum turning speed and distance.
The width of view of the camera at a distance of
30 cm from the robot, or 35 cm from the camera, is
30 cm. The width of our robot is 20 cm, so if the vi-
sion module does not find an obstacle inside the trig-
ger range, the robot can safely move forward with a
5 cm margin on each side. The second criterion is that
the robot needs to be able to enter cluttered areas.
This means it should not turn too early when approaching
objects. Also, when the robot is confronted by a wall
or a large object, it should turn just enough to move
along the wall or object and not bounce back. This
criterion encourages the robot to explore the environment.
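The following Python sketch illustrates this reactive rule. Apart from the 30 cm trigger distance, the constants (image height, row thresholds, speeds) are illustrative assumptions rather than our actual tuned values:

import numpy as np

IMAGE_ROWS = 120     # height of the low-resolution image (assumed)
TRIGGER_ROW = 90     # row whose ground projection lies ~30 cm ahead
SLOWDOWN_ROW = 60    # rows above this only make the robot slow down
CRUISE_SPEED = 1.0   # normalised forward speeds
SLOW_SPEED = 0.5
TURN_SPEED = 0.8     # normalised turning rate; positive turns left

def control_step(vector: np.ndarray) -> tuple[float, float]:
    """Map the per-column nearest-obstacle rows to a
    (forward speed, turn rate) command."""
    nearest = vector.max()
    if nearest >= TRIGGER_ROW:
        # Obstacle inside the trigger range: stop advancing and turn
        # towards the side whose obstacles are farther away
        # (smaller mean row index = more open space).
        half = vector.size // 2
        left, right = vector[:half].mean(), vector[half:].mean()
        return 0.0, (TURN_SPEED if left < right else -TURN_SPEED)
    if nearest >= SLOWDOWN_ROW:
        # Obstacle visible but still outside the trigger range:
        # slow down, buying roughly an extra second of reaction time.
        return SLOW_SPEED, 0.0
    return CRUISE_SPEED, 0.0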
4 EXPERIMENTS
4.1 Experiment Setup and Results
We tested the robot in two environments, a 1.5 × 2.5 m²
artificial arena surrounded by 30 cm high walls and
an office at the University of Kent Computing De-
partment, shown in Fig. 3. The surface of the arti-
ficial arena is flat cardboard with green wall-
paper on top. We put different real objects, such as
boxes, shoes and books, onto the arena. We first tested the
robot in the arena with no objects (the only obstacles
are walls) and then made the tests more difficult by
adding objects. The office is covered with a carpet.
The arena presents a more controllable environment,
where the surface is smooth and relatively colour-
uniform. The office environment is more chal-
lenging: even though the ground is flat, its sur-
face is much coarser and not colour-uniform.
Figure 3: Snapshots of the robot in the test environments
and its trajectories. A: the artificial arena with 4 objects.
B: A small area near the office corner. C: A path that went
through a chair’s legs. D: An object with no base on the
ground.
For each test, the robot ran for 5 minutes. We placed
the robot in different places and put different objects
into the test area. In general, the robot is quite compe-
tent; Table I summarises the results. The vision-based
obstacle detection module correctly identified obsta-
cles with almost 100% accuracy; that is, if there was an
obstacle in the camera view, the algorithm would reg-
ister a non-ground area. Although the calculated dis-
tances of obstacles are not very accurate, they provide
enough information for the controller to react. The
simple mechanism of finding an open space worked
surprisingly well. The robot was good at finding a
way out of small areas such as those under tables
and between chairs. The number of false positives was
also low, and they occurred only in the office environment.
This is because the office's floor colours are more dif-
ficult to capture thoroughly. Further analysis revealed
that false positives often occurred in the top part of the
images. This is explained by the ratio of pixels to area
in the upper part of the image being lower than in the
bottom part. At the top row of the image, each pixel
corresponds to an area of 7 × 4 cm², while at the bot-
tom row the area is 2 × 1.5 cm². Fortunately, the upper
part also corresponds to the farther area in the real world.
Therefore, most false-positive cases resulted in an unnec-
essary decrease of speed but not a change of direction.
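As a rough guide, the pixel footprint at intermediate rows can be linearly interpolated between these two measured values, as in the sketch below; the true relation is perspective rather than linear, and the image height is an assumed value:

def pixel_footprint(row: int, rows: int = 120) -> tuple[float, float]:
    """Approximate ground footprint (width, depth) in cm of a single
    pixel, linearly interpolated between the measured values:
    7 x 4 cm at the top row (row 0) and 2 x 1.5 cm at the bottom row.
    The true relation is perspective, so this is only a rough guide."""
    t = row / (rows - 1)           # 0 at the top, 1 at the bottom
    width = 7.0 + t * (2.0 - 7.0)  # 7 cm at top -> 2 cm at bottom
    depth = 4.0 + t * (1.5 - 4.0)  # 4 cm at top -> 1.5 cm at bottom
    return width, depth

print(pixel_footprint(60))  # mid-image: roughly (4.5, 2.7) cm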
Because of the robot's reactive behaviour, it is capable
of responding quickly to changes in the environment.
During some of the tests, we removed and placed obsta-
cles in front of the robot. The robot could react to the
changes and altered its running direction accordingly.
Fig. 3 shows four snapshots of the robot during op-
eration and its trajectory. In picture A, the robot ran
in the arena with four obstacles; it successfully avoided