avatars are available as agents. The previously described perception and communication functions can be enabled by using dedicated C++ APIs to define the actions of agents; some of these APIs are listed in Table 1. In the future, I plan to extend the programming support beyond C++ to interpreted languages such as Python. The avatars do not just behave as programmed: they can also act on the basis of instructions given to them by operators in real time.
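As a rough illustration of this style of programming, the sketch below shows how an agent's actions could be defined through C++ callbacks that the simulation server invokes. The type and method names here are illustrative stand-ins, not the actual API entries of Table 1.

// Minimal sketch of an agent controller, assuming a callback-style
// C++ API. All names below are hypothetical stand-ins.
#include <iostream>
#include <string>

// Stand-in event types a simulation server might pass to callbacks.
struct ActionEvent { double currentTime; };
struct MessageEvent { std::string sender; std::string body; };

// Stand-in controller base class: the server calls onAction()
// periodically and onRecvMessage() when a message arrives.
class AgentController {
public:
  virtual ~AgentController() = default;
  virtual double onAction(ActionEvent &evt) = 0;       // returns seconds until next call
  virtual void onRecvMessage(MessageEvent &evt) = 0;
};

// Example agent: acts on each step and reacts to operator messages.
class GreeterRobot : public AgentController {
public:
  double onAction(ActionEvent &evt) override {
    std::cout << "t=" << evt.currentTime << ": stepping forward\n";
    return 0.1;   // ask to be called again in 0.1 s
  }
  void onRecvMessage(MessageEvent &evt) override {
    std::cout << "heard from " << evt.sender << ": " << evt.body << "\n";
  }
};

int main() {
  GreeterRobot robot;
  ActionEvent tick{0.0};
  robot.onAction(tick);
  MessageEvent msg{"operator", "please start"};
  robot.onRecvMessage(msg);   // real-time instruction from an operator
}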
To simulate perception, it is necessary to spread the computational load, so the system is configured to enable calculations not only by the central server but also by separately installed perception simulation servers. More specifically, the module that provides a pixel map of an image to simulate the sense of sight runs on a perception simulation server rather than on the central server.
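The sketch below illustrates this division of labor: the central server only routes a vision request, while a separately installed perception simulation server renders the pixel map. All names here are illustrative assumptions, not SIGVerse's actual interfaces.

// Sketch of distributed perception: rendering is delegated to a
// perception simulation server. Names are hypothetical.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// RGB pixel map returned by a perception simulation server.
struct PixelMap {
  int width = 0, height = 0;
  std::vector<std::uint8_t> rgb;   // width * height * 3 bytes
};

// Interface implemented by each perception simulation server.
class PerceptionServer {
public:
  virtual ~PerceptionServer() = default;
  // Render the scene from the named agent's camera.
  virtual PixelMap captureView(const std::string &agentName) = 0;
};

// Trivial in-process stand-in for a remote rendering server.
class LocalRenderer : public PerceptionServer {
public:
  PixelMap captureView(const std::string &) override {
    PixelMap m;
    m.width = 2; m.height = 1;
    m.rgb = {255, 0, 0, 0, 255, 0};   // two placeholder pixels
    return m;
  }
};

// The central server only routes the request, keeping the heavy
// rendering load off itself.
PixelMap requestSight(PerceptionServer &server, const std::string &agent) {
  return server.captureView(agent);
}

int main() {
  LocalRenderer renderer;
  PixelMap view = requestSight(renderer, "avatar01");
  std::cout << view.width << "x" << view.height << " pixels received\n";
}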
The configuration of the SIGVerse software is
shown in Figure 3.
3 EXAMPLE OF SIGVERSE USE
A feature of SIGVerse is that dynamic calculations, perception simulations, and communication simulations can be performed simultaneously. In this section, I describe an example of humans and robots working in partnership to execute a task, and another example of a multi-agent system, as applications that fully utilize all three of these functions.
3.1 Use in Evaluating Human-Machine Cooperation
The objectives of the developers who use this simulation are to determine how to develop the intelligence of a robot that can execute a task in partnership with a human, and how to implement efficient cooperative behavior. The developers created decision and action modules for the robot, adopting various models and hypotheses, and confirmed their performance on the simulator. The simulation requires cooperation between a real human and a robot, which could not otherwise be achieved without purchasing or developing a life-size humanoid robot. In this simulation, the operator who partners with the robot manipulates an avatar in a virtual environment to reproduce cooperative actions between a user and a humanoid robot. An intelligence module created by the developers uses virtual equivalents of the senses of sight and hearing to comprehend the situation within that space and recognize the state of the user; it also performs dynamic calculations to control the arms, and communication between the avatar and the robot is likewise simulated. Expanding on this kind of usage example will not only further research into human-machine cooperation, it will also enable the construction of a research and teaching system with a competitive base for applications such as RoboCup (Kitano et al., 1998).
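As a deliberately simplified illustration of such an intelligence module, the following sketch combines the three simulated functions in one decision step: a pixel map for sight, a user utterance for hearing, and an arm command handed to the dynamics calculation. The types, rules, and threshold are illustrative assumptions, not the developers' actual module.

// Toy intelligence module fusing sight and hearing into an arm
// command. All names and rules are hypothetical.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct PixelMap {
  int width = 0, height = 0;
  std::vector<std::uint8_t> rgb;
};
struct ArmCommand { double shoulderAngle = 0.0, elbowAngle = 0.0; };

// Crude "sight": treat a bright image as the hotplate being lit.
bool hotplateLooksLit(const PixelMap &view) {
  long sum = 0;
  for (std::uint8_t v : view.rgb) sum += v;
  return !view.rgb.empty() && sum / static_cast<long>(view.rgb.size()) > 128;
}

// One decision step: perceive, interpret the user's words, act.
ArmCommand decide(const PixelMap &view, const std::string &userUtterance) {
  ArmCommand cmd;
  if (hotplateLooksLit(view) && userUtterance.find("flip") != std::string::npos) {
    cmd.shoulderAngle = 0.8;   // reach toward the pancake
    cmd.elbowAngle = 1.2;
  }
  return cmd;   // fed to the dynamics engine that moves the arm
}

int main() {
  PixelMap view;
  view.width = 1; view.height = 1;
  view.rgb = {200, 200, 200};   // bright placeholder pixel
  ArmCommand cmd = decide(view, "please flip the pancake");
  std::cout << "shoulder angle: " << cmd.shoulderAngle << "\n";
}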
Taking the above application as an example, I implemented in SIGVerse a situation in which a human being and a robot cooperate in the task of cooking "okonomiyaki" ("okonomiyaki" is a popular cook-at-the-table food in Japan, similar to a thick pancake). Example screens from the execution of this application are shown in Figure 2. The GUI available to the operator has buttons such as "flip the pancake", "oil the hotplate", "apply sauce", and "adjust the heat".
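A minimal sketch of how such GUI buttons could be wired to command messages sent into the simulation follows; the message strings and the sendMessage() helper are hypothetical, not the application's actual implementation.

// Toy mapping from GUI buttons to command messages. Names are
// hypothetical stand-ins.
#include <iostream>
#include <map>
#include <string>

// Stand-in for the function that delivers a message to an agent.
void sendMessage(const std::string &to, const std::string &msg) {
  std::cout << "-> " << to << ": " << msg << "\n";
}

int main() {
  // Each GUI button is bound to one cooking command.
  std::map<std::string, std::string> buttons = {
    {"flip the pancake", "FLIP"},
    {"oil the hotplate", "OIL"},
    {"apply sauce",      "SAUCE"},
    {"adjust the heat",  "HEAT"},
  };
  // Pressing a button forwards its command to the avatar agent.
  sendMessage("avatar", buttons["flip the pancake"]);
}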
Furthermore, providing an immersive interface to users is very important for conducting realistic psychophysical experiments through the simulator. Figure 3 shows an example in which the user operates the cooking devices with a PHANToM Omni haptic interface to manipulate the "okonomiyaki".
The objective of the task is to cooperate in cooking the okonomiyaki as quickly as possible without burning it. The operator basically uses the GUI to move the work forward, but the robot continuously judges the current situation and, if it considers that it can do something in parallel with the operator's work, asks the operator questions such as "Should I oil the hotplate now?" or "Should I turn the heat down?" It then executes those jobs according to the operator's responses. Figure 2 shows a scene in which the avatar in the virtual environment is about to flip the pancake, based on the operator's instructions, with the help of the robot agent.
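The robot's proactive behavior described above can be sketched as a simple rule set that proposes a parallel step and waits for the operator's answer. The task-state fields and rules below are illustrative assumptions, not the implemented decision module.

// Toy rule set for proposing a parallel step. Names and rules are
// hypothetical.
#include <iostream>
#include <string>

struct TaskState {
  bool hotplateOiled;
  bool pancakeBurningRisk;
  std::string operatorAction;   // what the operator is doing now
};

// Returns a question to ask the operator, or "" if nothing is useful.
std::string proposeParallelStep(const TaskState &s) {
  if (!s.hotplateOiled && s.operatorAction != "oil the hotplate")
    return "Should I oil the hotplate now?";
  if (s.pancakeBurningRisk)
    return "Should I turn the heat down?";
  return "";
}

int main() {
  TaskState s{false, false, "apply sauce"};
  std::string question = proposeParallelStep(s);
  if (!question.empty())
    std::cout << "Robot: " << question << "\n";   // acts only after a "yes"
}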
I performed experiments on two cases: one in which the operator performed all of the steps through the GUI, and one in which the robot took over suitable parts of the operator's work. In the first case, in which the operator did all of the work, the task took three minutes 14 seconds to finish; in the second case, involving cooperation, it took one minute 58 seconds. In this manner, the system can be used effectively as a tool for quantitatively evaluating human-machine cooperation systems.
3.2 Introduction of an Immersive Interaction Space for SIGVerse
The above applications used a conventional display interface, such as a web browser running on a personal computer. However, if the aim of the application is to handle natural, real whole-body motion patterns that connect the real world and the cyber world, interface devices