integration strategy. For example, Ishiguro et al. (1999) proposed a robotic architecture based on situated modules and reactive modules, in which the reactive modules represent the purely reactive part of the system and the situated modules are higher-level modules programmed in a high-level language to provide specific behaviors to the robot. The situated modules are evaluated serially, in an order controlled by the module controller. Research on nonverbal communication in humans reveals a different picture, in which multiple processes collaborate to realize natural action. For example, Argyle (2001) showed that human spatial behavior in close encounters can be modeled with two interacting processes. It is possible in the selective framework to implement these two processes as a single behavior, but this goes against the spirit of behavioral architectures, which emphasize
modularity of behavior (Perez, 2003). This leads to
the first requirement for HRI-aware action integration: The action integration mechanism should allow a continuous range from purely selective to purely combinative strategies. In other words, the system should use a hybrid integration strategy. The need to manage the degree of combinativity based on the current situation entails the second requirement: The action integration mechanism should adapt to the environmental state, using timely sensor information as well as the internal state of the robot. In current systems this requirement is usually met by a higher-level deliberative layer, but in many cases the interaction between simple reactive processes within the action integrator can achieve the same result, as the example implementation in this paper will show.
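The first two requirements can be made concrete with a minimal sketch: a single integrator whose blend sharpness is tuned online from sensor information. The softmax weighting and the parameter names below are illustrative assumptions, not the mechanism of any of the cited systems.

```python
import math

def integrate(commands, activations, beta):
    """Blend per-behavior commands for one degree of freedom.

    beta = 0      -> purely combinative (equal-weight average)
    beta -> +inf  -> purely selective (winner-take-all on activation)
    Intermediate values of beta give a hybrid strategy (R1); beta itself
    can be updated each cycle from sensor data (R2).
    """
    # Softmax weights over activation levels; beta controls sharpness.
    exps = [math.exp(beta * a) for a in activations]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * c for w, c in zip(weights, commands))
```

With commands `[1.0, -1.0]` and activations `[0.9, 0.1]`, `beta = 0` averages the two commands to zero, while a large `beta` lets the strongly activated behavior dominate, so one parameter spans the selective-combinative continuum.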
The Hybrid Coordination approach presented in (Perez, 2003) is the closest existing approach to meeting this first requirement. In this system, behaviors are combined pairwise using Hierarchical Hybrid Coordination Nodes (HHCNs), each of which has two inputs. The output of an HHCN is calculated as a nonlinear combination of its two inputs, controlled by the activation levels of the source behaviors and an integer parameter k that determines how combinative the HHCN is; larger values of k make the node more selective. The HHCNs are then arranged in a hierarchical structure to generate the final command for every DoF of the robot (Perez, 2003). Although experiments with the navigation of an autonomous underwater robot have shown that the hybrid coordination architecture can outperform traditional combinative and selective architectures, it still has some limitations in the HRI domain. One major limitation of the hybrid coordination system is its reliance on binary HHCNs, which makes it unsuitable for large numbers of behaviors due to the exponential growth in the number of HHCNs needed.
Another problem is the choice of the parameter k for every HHCN. The most difficult problem for this system, however, is finding the correct arrangement of the behaviors into the HHCN inputs. This leads to the third requirement: The action integration mechanism should not depend on global relationships between behaviors. Another major problem with this architecture is that every behavior must calculate its own activation level. Although this is easy for behaviors like avoid-obstacles or go-to, it is very difficult for interactive processes like attend-to-human, because the achievement of such processes is not manifested in an easily measurable goal state that must be achieved or maintained, but in the exact way the overall behavior of the robot changes over time. This leads to the fourth requirement: The action integration mechanism should separate the calculation of a behavior's influence from the behavior's computation.
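A binary coordination node of the kind described above can be sketched as follows. The power-mean weighting is an assumed nonlinearity chosen to reproduce the described effect of k (larger k is more selective); it is not the exact formula of (Perez, 2003).

```python
def hhcn(u1, a1, u2, a2, k=1):
    """Sketch of a binary Hierarchical Hybrid Coordination Node.

    u1, u2 : command outputs of the two source behaviors
    a1, a2 : their activation levels in [0, 1]
    k      : integer selectivity parameter; k = 1 gives a proportional
             (combinative) blend, while large k approaches
             winner-take-all (selective) coordination.
    """
    # Raise activations to the power k, then take the weighted mean.
    w1, w2 = a1 ** k, a2 ** k
    return (w1 * u1 + w2 * u2) / (w1 + w2)
```

For a robot with n behaviors per DoF, such nodes are cascaded in a tree whose leaves are the behaviors, which makes explicit the two difficulties noted above: a k must be chosen for every node, and the tree shape fixes a global arrangement of behaviors.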
The number of behaviors needed in interactive robots is usually very high compared with autonomously navigating robots if the complexity of each behavior is to be kept acceptably low, yet most of those behaviors are passive at any specific moment, depending on the interaction situation. This property leads to the fifth requirement: The system should have a built-in attention-focusing mechanism. HRI systems usually work in the real world with high levels of noise, but the robot is still required to show a form of goal-directed behavior. This leads to the sixth requirement: The action integration system should be robust against noise and data loss, so as to maintain goal-directed behavior.
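One simple way to combine attention focusing with noise robustness is to gate behaviors on a smoothed activation signal, so that transient sensor noise or a brief data loss does not flip the set of attended behaviors. The threshold and smoothing factor below are illustrative choices, not values from any of the cited systems.

```python
class AttentionGate:
    """Sketch: attention focusing over noisy activation signals.

    Raw activations are smoothed with an exponential moving average,
    and only behaviors whose smoothed activation exceeds a threshold
    are evaluated at all; passive behaviors are skipped entirely.
    """

    def __init__(self, n_behaviors, threshold=0.3, alpha=0.2):
        self.smoothed = [0.0] * n_behaviors
        self.threshold = threshold  # attention cut-off
        self.alpha = alpha          # smoothing factor (0 < alpha <= 1)

    def attended(self, raw_activations):
        # EMA smoothing; a single noisy spike barely moves the estimate.
        self.smoothed = [
            (1 - self.alpha) * s + self.alpha * a
            for s, a in zip(self.smoothed, raw_activations)
        ]
        return [i for i, s in enumerate(self.smoothed)
                if s >= self.threshold]
```

After a behavior has been consistently active for a while it enters the attended set, while a one-cycle noise spike on an inactive behavior is filtered out, which is the combination of properties the fifth and sixth requirements ask for.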
In summary, the six requirements HRI imposes on the action integration system are:
R1 It should allow a continuous range from purely selective to purely combinative strategies.
R2 It should adapt to the environmental state utilizing timely sensor information.
R3 It should not depend on global relationships between behaviors.
R4 It should separate the calculation of a behavior's influence from the behavior's computation.
R5 It should have built-in attention focusing.
R6 It should be robust against noise and data loss.
Table 2 compares the action integration schemes of some widely used behavioral architectures with the proposed system in terms of the six requirements.
In this paper an action integration mechanism that has the potential to meet these requirements is presented.
ICINCO 2008 - International Conference on Informatics in Control, Automation and Robotics