approaches adopted by (Wolf and Hata, 2009) and (Kwon and Lee, 1999), it can be seen that the physical size, high power consumption and processing requirements of laser range finders restrict their use to large robots. Moreover, a single laser range finder can cost from £800 to £3000, which makes it unsuitable for multi-robot environments where the objective is also to keep the cost of each robot unit low. In (Biber et al., 2004), the information from the laser scanner is fused with a vision sensor to provide a more accurate map of the environment, but this further increases the computational demands of the approach. In (Howard, 2004) and (Latecki et al., 2007), a multi-robot environment mapping problem is addressed, but the results are limited to simulations only. In (Leon et al., 2009), a grid-based mapping solution using multiple robots is addressed, but it also relies on high-performance systems.
When using a group of small robots, it is preferable to use simple, computationally inexpensive algorithms so that the task can be achieved with limited processing resources. In this research, a distributed vision-based multi-robot environment mapping problem is addressed, in which a group of robots collectively tries to obtain a common global map of the environment using the visual clues obtained from their surroundings. The generated map is intended to facilitate multi-robot mission planning, as the environmental map together with the robots' positions on the map will be available. The problem addressed here differs from SLAM in that the robots are provided with localisation information by a ceiling-mounted camera system. The robots can share information through a wireless communication medium, which is usually prone to noise and acts as a bottleneck for information distribution. The aim in this research is therefore for each robot to rely on visual features that are simple enough not to overload the network, yet sufficient to map the environment collectively.
These visual features are represented in the form of a vector, extracted by determining the distance to neighbouring objects using vision information. Given the robot camera's field of view, this distance-vector feature can be used to map the environment, provided that a precise robot location and orientation are available. In the following sections, the complete multi-robot environment mapping approach is presented.
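As an illustration of this projection step, the sketch below converts a distance vector, sampled at evenly spaced bearings across the camera's field of view, into occupied cells of a 2D grid map, given a robot pose supplied by an external localisation system. This is a minimal sketch rather than the authors' implementation; the function name, cell size, grid size and field of view are all assumed values.

import math
import numpy as np

CELL_SIZE = 0.05        # metres per grid cell (assumed)
GRID_SIZE = 200         # 200 x 200 cells, i.e. a 10 m x 10 m area (assumed)
FOV = math.radians(60)  # assumed horizontal field of view of the camera

def update_map(grid, pose, distances):
    """Mark grid cells occupied from a distance-vector feature.

    grid      -- 2D numpy array; 0 = unknown/free, 1 = occupied
    pose      -- (x, y, theta): robot position (m) and heading (rad)
    distances -- distance (m) to the nearest object along each of the
                 evenly spaced bearings across the camera FOV
    """
    x, y, theta = pose
    n = len(distances)
    for i, d in enumerate(distances):
        if not math.isfinite(d):   # no object detected along this bearing
            continue
        # Bearing of this sample in the world frame.
        bearing = theta - FOV / 2 + i * FOV / max(n - 1, 1)
        # World coordinates of the detected object boundary point.
        ox = x + d * math.cos(bearing)
        oy = y + d * math.sin(bearing)
        # World coordinates -> grid indices.
        col, row = int(ox / CELL_SIZE), int(oy / CELL_SIZE)
        if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
            grid[row, col] = 1
    return grid

grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)
grid = update_map(grid, pose=(5.0, 5.0, 0.0),
                  distances=[0.8, 0.9, float("inf")])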
2 METHODOLOGY
To address the multi-robot environment mapping problem, two Surveyor SRV1 robots equipped with vision sensors were used. The Surveyor robot (Surveyor-Corporation, 2012) is shown in Figure 1a. For obtaining the robot localisation information, a ceiling-camera-based robot localisation system was used. This system comprises two ceiling-mounted cameras and a server. Using the visual information from the two ceiling-mounted cameras, the system determines the robots' positions, tracks them and passes them to the robots.
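As an illustration only, a robot could obtain its pose from such a server as in the minimal sketch below; the paper does not specify the transport or message format, so the server address, port, request string and the one-line "x y theta" reply are all assumptions.

import socket

SERVER = ("192.168.0.10", 5000)   # hypothetical localisation server

def get_pose(robot_id):
    """Ask the ceiling-camera localisation server for this robot's
    (x, y, theta); assumes a simple one-line 'x y theta' text reply."""
    with socket.create_connection(SERVER, timeout=1.0) as sock:
        sock.sendall(f"POSE {robot_id}\n".encode())
        reply = sock.recv(64).decode()
    x, y, theta = (float(v) for v in reply.split())
    return x, y, theta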
Each robot creates an environmental map in its memory. This map is also updated by all the other robots working in the environment. For this purpose, each robot obtains its location and orientation from the localisation system. Once a robot knows its location and orientation (as explained in Section 2.2), it uses the visual clues obtained from its vision sensor to determine the boundaries of the surrounding objects in the direction of its heading. The robot uses this detected boundary information to update its own map. Apart from updating its own map, the robot also broadcasts the map update to the other members in the environment, and each robot that receives it updates its map accordingly. In this way, each robot in the environment not only knows the other robots' positions but also maintains a common map built from the contributions of all the robots. Each robot also passes the map update information to the server running the ceiling-camera-based localisation system, so that the map-building process can be observed on the server side.
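One way this update-and-broadcast step could be realised is sketched below; the JSON message format, broadcast address and port are assumptions made for illustration, not the protocol used in this work.

import json
import socket

BCAST = ("255.255.255.255", 6000)   # hypothetical broadcast address/port

def broadcast_update(robot_id, occupied_cells):
    """Send this robot's newly detected occupied cells to all peers
    (and the server) as a small JSON datagram."""
    msg = json.dumps({"robot": robot_id,
                      "cells": list(occupied_cells)}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, BCAST)

def apply_update(grid, msg_bytes):
    """Merge a received map update into this robot's local grid."""
    update = json.loads(msg_bytes)
    for row, col in update["cells"]:
        grid[row][col] = 1

Keeping each update to a short list of changed cells, rather than the whole grid, is one way to avoid overloading the noisy wireless medium mentioned above.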
As the robots being used have limited on-board memory and processing resources, it was decided to use a very lightweight vision algorithm to solve this problem. The problem was divided into two parts, namely Objects' Boundary Detection, and Robot Localisation and Mapping. These are explained in the following sections.
2.1 Objects’ Boundary Detection
To determine the objects' boundaries, a segmentation-based algorithm was used. A similar approach was used by (Ahmed et al., 2012b), where it was utilised to develop an efficient vision-based obstacle avoidance algorithm. This obstacle avoidance algorithm also works in parallel, helping the robot control algorithm to avoid colliding with obstacles. If the vision-based obstacle avoidance algorithm gives the ground-clearance signal to the robot control algorithm, the mapping algorithm is called, which requires the surrounding objects' boundary information; a minimal sketch of one such segmentation step is given below.
To explain the concept of segmentation-based object boundary detection