of the CCU and the drive, various secondary sen-
sors can easily be attached on any side or on top
of the robot. Depending on the particular task, each
sensor may communicate via the CAN bus either
with the CCU, the vision unit, or even directly with
the drive.
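As an illustration of such sensor traffic (the payload layout here is hypothetical, not taken from the paper), a sensor reading travelling over the CAN bus could be packed into a classic 8-byte data frame like this:

```python
import struct

def pack_sensor_frame(sensor_no, value):
    """Pack a sensor number and a float reading into an 8-byte CAN payload:
    2-byte sensor number, 4-byte little-endian float, 2 padding bytes."""
    return struct.pack("<Hf2x", sensor_no, value)

def unpack_sensor_frame(payload):
    """Inverse of pack_sensor_frame, used by the receiving unit."""
    sensor_no, value = struct.unpack("<Hf2x", payload)
    return sensor_no, value

payload = pack_sensor_frame(3, 21.5)
assert len(payload) == 8  # a classic CAN frame carries at most 8 data bytes
assert unpack_sensor_frame(payload) == (3, 21.5)
```

Whether a frame is consumed by the CCU, the vision unit or the drive is then purely a matter of the CAN identifier it is sent under.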
Actuator(s): Actuators have to be connected via the
CAN bus. A well-defined and fixed set of (low-level)
action commands sent from the CCU to the actuators
is the basis for easy replacement of actuators.
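Such a fixed low-level command set could, for instance, be captured as a shared enumeration that both the CCU and the actuator firmware agree on (the command names below are invented for illustration):

```python
from enum import Enum

class ActuatorCommand(Enum):
    """Hypothetical fixed low-level command set shared by CCU and actuators."""
    KICK = 1
    GRAB = 2
    RELEASE = 3

def encode(cmd: ActuatorCommand) -> bytes:
    # One command byte suffices for a small fixed command set.
    return bytes([cmd.value])

def decode(data: bytes) -> ActuatorCommand:
    # An unknown byte raises ValueError, so malformed frames are rejected.
    return ActuatorCommand(data[0])

assert decode(encode(ActuatorCommand.KICK)) is ActuatorCommand.KICK
```

Because the encoding is fixed, a replacement actuator only has to implement the same enumeration to be usable without CCU changes.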
Figure 1: A robot soccer player as an example for the func-
tional structure of the modular robot architecture.
2 SOFTWARE ARCHITECTURE
Reconfigurable robot hardware needs, on the soft-
ware side, a flexible and modular architecture. To
achieve this goal we need a middleware which bridges
the gap between the operating system and the applications.
Our object-oriented software architecture is based
on Linux and the ICE middleware (Henning and
Spruiell, 2004). The Internet Communications Engine
(ICE) gives us the possibility to develop autonomous
software modules in different programming lan-
guages, e.g. Java, C++ and Python.
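With ICE, such a module boundary is declared once in its interface language (Slice) and can then be implemented in any of the supported languages. The idea can be sketched in plain Python with an abstract contract and an interchangeable implementation (the names here are illustrative, not ICE API):

```python
from abc import ABC, abstractmethod

class WorldModel(ABC):
    """Module boundary, sketched as a Python ABC; with ICE the same
    contract would be declared once in Slice and implemented in Java,
    C++ or Python alike."""

    @abstractmethod
    def objects(self) -> list:
        """Return the currently known objects with their positions."""

class SoccerWorldModel(WorldModel):
    def __init__(self):
        self._objects = [("ball", (2.47, 11.93))]

    def objects(self):
        return list(self._objects)

# A client (e.g. the CCU) depends only on the abstract contract.
def nearest_object(model: WorldModel):
    return min(model.objects(), key=lambda o: o[1][0] ** 2 + o[1][1] ** 2)

assert nearest_object(SoccerWorldModel())[0] == "ball"
```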
As shown in Figure 1, the software architecture is
divided into a vision unit with low- and high-level
processing, a world model, and a central control unit
(CCU) for planning the actions of the robot.
2.1 Software Modularity
Suppose we have configured the five units of the robot
to solve a particular task. If we now want to reconfigure
the system for a new, completely different task,
we may for example need a different drive and a
different actuator. Of course, for a new task the software
of the CCU has to be replaced by a new one.
For exchanging the drive, thanks to the application-
independent fixed interface between the CCU and the
drive, we just attach any new drive (with its built-in
motor controller) to the robot, with no software changes
at all in the CCU or in the motor controller. Thus
exchanging drives works in a plug-and-play manner.
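A minimal sketch of this plug-and-play exchange, assuming the fixed interface consists of velocity commands (the method and class names are hypothetical):

```python
class OmniDrive:
    """A drive with built-in motor controller, speaking the fixed interface."""
    def set_velocity(self, linear, angular):
        return f"omni: v={linear} w={angular}"

class DifferentialDrive:
    """A replacement drive; the CCU code below needs no change at all."""
    def set_velocity(self, linear, angular):
        return f"diff: v={linear} w={angular}"

def ccu_step(drive):
    # The CCU relies only on the fixed interface, never on the drive type.
    return drive.set_velocity(0.5, 0.1)

assert ccu_step(OmniDrive()).startswith("omni")
assert ccu_step(DifferentialDrive()).startswith("diff")
```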
The interface between the CCU and the actuator(s) is
more complex, because the commands to the actuators
are application-specific. This means that the
CCU must send only commands which the actuator(s)
can interpret. The CCU programmer, of course, has
to know the actuator commands before programming
any actions.
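One way to honour this constraint (a sketch only; the command names are invented) is for the CCU to validate each command against the set the attached actuator advertises:

```python
class KickerActuator:
    """Hypothetical actuator advertising its application-specific commands."""
    commands = {"kick_low", "kick_high"}

    def execute(self, cmd):
        return f"executing {cmd}"

def ccu_send(actuator, cmd):
    # The CCU must only send commands the actuator can interpret.
    if cmd not in actuator.commands:
        raise ValueError(f"unknown command for this actuator: {cmd}")
    return actuator.execute(cmd)

kicker = KickerActuator()
assert ccu_send(kicker, "kick_low") == "executing kick_low"
```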
The interface between the CCU and the vision unit
seems to be even more complex, because both units
have to use the same structure of the world model.
Thus, at least the mapping and tracking part of the
vision unit would have to be reprogrammed for every
new CCU software. A solution to this problem is based
on a generic vision unit which is able to detect objects
with a wide range of different shapes and colours.
When the CCU and the vision unit are connected,
they start an initial dialogue in which the CCU sends
a specification of the required world model structure
to the vision unit. For example in a soccer applica-
tion the CCU may send an object-description-list with
items like
object("ball", moving,
       geometry(circle(20,25)),
       color([0.9,1],[0.7,0.8],[0,0.5])),
object("goal1", fixed,
       geometry(rectangle(200,100)),
       color([0,0.3],[0,0.3],[0.9,1])),
describing an object “ball” as a moving object with
the shape of a circle, a diameter between 20 and 25
cm and orange colour. “goal1” is a fixed blue 200 ×
100 cm rectangle. After this initialization, when the
robot starts working, in each elementary perception-
action-cycle the vision unit tries to detect objects of
the types specified by the object-description-list and
returns them together with their current position. For
example in the simplest case the vision unit sends a
list like
detected("ball", pos(2.47,11.93)),
detected("goal1", pos(5.12,3.70)),
to the CCU. This description may be more complex,
including for example the size and colour of the
ICINCO 2005 - ROBOTICS AND AUTOMATION
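Putting the initial dialogue and the per-cycle reply together, the vision unit's side of the exchange can be sketched as follows (the data layout mirrors the listings above; the detector itself is of course a stand-in):

```python
# Object-description-list sent by the CCU during the initial dialogue,
# mirroring the object(...) entries in the listing above.
description_list = [
    {"name": "ball", "kind": "moving", "shape": ("circle", 20, 25),
     "color": ([0.9, 1], [0.7, 0.8], [0, 0.5])},
    {"name": "goal1", "kind": "fixed", "shape": ("rectangle", 200, 100),
     "color": ([0, 0.3], [0, 0.3], [0.9, 1])},
]

def perception_cycle(description_list, detections):
    """One elementary perception-action cycle: report every described
    object the (here faked) detector currently sees, with its position."""
    known = {d["name"] for d in description_list}
    return [("detected", name, pos) for name, pos in detections
            if name in known]

# Stand-in for the real detector output of one cycle.
raw = [("ball", (2.47, 11.93)), ("goal1", (5.12, 3.70))]
reply = perception_cycle(description_list, raw)
assert reply[0] == ("detected", "ball", (2.47, 11.93))
```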