fore becomes statically unstable, the platform moves
in a way that puts the human demonstrator standing
on the platform in an unstable state that is directly
comparable to the instability of the humanoid robot.

Figure 5: Inclining parallel platform that can rotate around
all three axes. The platform is 0.7 m in diameter and is able
to carry an adult human.

The human demonstrator is forced to correct his/her
balance by moving the body. Consequently, as the
motion of the human demonstrator is fed-forward
to the humanoid robot in real-time, the humanoid
robot gets back to the stable posture together with the
demonstrator. Using some practice, human demon-
strators easily learned how to perform in-place step-
ping on the humanoid robot. The obtained trajecto-
ries can afterwards be used to autonomously control
the in-place stepping of the humanoid robot. Our fu-
ture plans are to extend this approach and use it for
acquiring walking of the humanoid robots. Figure 6
shows the human demonstrator and Fujitsu Hoap-3
humanoid robot during the in-place stepping experi-
ment.
Figure 6: The human demonstrator and the Fujitsu HOAP-3
humanoid robot during the in-place stepping experiment.
The video frame on the left shows the human demonstrator
performing in-place stepping on the inclining parallel
platform; the frame on the right shows the humanoid robot
in a one-foot posture.
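The real-time feed-forward of the demonstrator's motion described above can be sketched as a simple retarget-and-stream loop. This is a minimal illustration only: the joint names, per-joint gains, and the capture/robot interface callables are hypothetical placeholders, not the actual HOAP-3 or motion-capture API.

```python
import time

# Hypothetical human-to-robot joint mapping with per-joint scaling gains.
# The names and values are illustrative, not the real HOAP-3 interface.
JOINT_MAP = {
    "hip_pitch": 0.9,
    "knee_pitch": 0.85,
    "ankle_pitch": 0.9,
}

def retarget(human_angles):
    """Map measured human joint angles (rad) onto robot joint commands."""
    return {joint: gain * human_angles[joint] for joint, gain in JOINT_MAP.items()}

def feedforward_loop(read_human_pose, send_robot_command, rate_hz=100, steps=3):
    """Stream the demonstrator's posture to the robot in real time.

    read_human_pose: callable returning {joint: angle} from motion capture.
    send_robot_command: callable accepting {joint: angle} for the robot.
    Both are placeholders for the actual capture and robot interfaces.
    Returns the list of sent commands, i.e. the recorded trajectory.
    """
    period = 1.0 / rate_hz
    trajectory = []
    for _ in range(steps):
        command = retarget(read_human_pose())
        send_robot_command(command)  # robot mirrors the demonstrator's balance correction
        trajectory.append(command)   # record for later autonomous playback
        time.sleep(period)
    return trajectory
```

Because the same commands are both sent and logged, the recorded trajectory can later be replayed without the demonstrator, as the text describes for autonomous in-place stepping.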
4 CONCLUSIONS
A goal of imitation of motion from demonstration is
to remove the burden of robot programming from
experts by letting non-experts teach robots. The
most basic method of transferring a motion from a
demonstrator to a robot would be to directly copy
the demonstrator's motor commands to the robot
(Atkeson et al., 2000) and adapt them to the robot
using some form of local controller. Our approach
differs in that the correct motor commands for the
robot are produced by the human demonstrator. The
price of this convenience is that the demonstrator
must train to control the robot in order to achieve
the desired action. In essence, instead of expert robot
programming, our method relies on the human
visuo-motor learning ability to produce the appropriate
motor commands on the robot, which can be played
back later or used to obtain controllers through
machine learning methods, as in our case of reaching.
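One simple way the recorded trajectories could feed a machine-learning method is a nearest-neighbor lookup over stored demonstrations: given a new reaching goal, replay the trajectory whose demonstrated goal is closest. The data and the method below are purely illustrative, since this section does not specify the learning algorithm used for reaching.

```python
import math

# Toy demonstration database: each entry pairs a reaching goal (x, y)
# with a recorded joint trajectory. Values are illustrative only.
demos = [
    ((0.2, 0.1), [[0.0, 0.0], [0.1, 0.2], [0.2, 0.4]]),
    ((0.3, 0.3), [[0.0, 0.0], [0.2, 0.3], [0.4, 0.6]]),
]

def nearest_demo(goal):
    """Return the recorded trajectory whose demonstrated goal is closest
    (Euclidean distance) to the queried goal."""
    return min(demos, key=lambda d: math.dist(d[0], goal))[1]
```

For example, a query near the first demonstrated goal returns that demonstration's trajectory; richer methods (regression, dynamic movement primitives) would generalize between demonstrations instead of replaying one.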
The main result of our study is the establishment
of a method for synthesizing robot motion using
human visuo-motor learning. To demonstrate the
effectiveness of the proposed approach, statically
stable reaching and in-place stepping were implemented
on a humanoid robot using the introduced paradigm.
ACKNOWLEDGEMENTS
The research work reported here was made possible
by the Japan Society for the Promotion of Science and
the Slovenian Ministry of Higher Education, Science
and Technology.
REFERENCES
Atkeson, C., Hale, J., Pollick, F., Riley, M., Kotosaka, S.,
Schaal, S., Shibata, S., Tevatia, T., Ude, A., Vijayaku-
mar, S., and Kawato, M. (2000). Using humanoid
robots to study human behavior. IEEE Intelligent Sys-
tems, 15:45–56.
Goldenberg, G. and Hagmann, S. (1998). Tool use and me-
chanical problem solving in apraxia. Neuropsycholo-
gia, 36:581–589.
Oztop, E., Lin, L.-H., Kawato, M., and Cheng, G. (2006).
Dexterous skills transfer by extending human body
schema to a robotic hand. In IEEE-RAS International
Conference on Humanoid Robots.
Schaal, S. (1999). Is imitation learning the route to hu-
manoid robots? Trends Cogn Sci, 3:233–242.
ROBOT SKILL SYNTHESIS THROUGH HUMAN VISUO-MOTOR LEARNING - Humanoid Robot Statically-stable Reaching and In-place Stepping