Vision System of Facial Robot SHFR-III for Human-robot
Interaction
Xianxin Ke, Yujiao Zhu, Yang Yang, Jizhong Xing and Zhitong Luo
Department of Mechanical and Electrical Engineering, Shanghai University, No. 149 Yanchang Road, Shanghai, China
Keywords: Facial Expression Robot, Visual System, Human-robot Interaction.
Abstract: Improving human-robot interaction is an inevitable trend in the development of robots, and vision is an important channel through which a robot obtains information from its surroundings. A binocular vision model is set up on the facial expression robot SHFR-III, and this paper develops a vision system for human-robot interaction that includes face detection, face location, gender recognition, and facial expression recognition and reproduction. The experimental results show that the vision system supports accurate and stable interaction and that the robot can carry out human-robot interaction.
1 INTRODUCTION
Population aging is increasingly serious in today's society, and the demand for robots in housekeeping and family nursing keeps growing. If robots can communicate naturally with people and express feelings while working, they will be better able to help people both psychologically and physiologically. Psychologists' studies show that only 7% of information is conveyed by spoken language, while 38% is expressed by paralanguage and 55% is transferred by facial expressions. It can therefore be expected that facial expression robots will continue to enter people's lives; this is an inevitable trend.
Based on static or dynamic face images, the robot can determine the relative position of the people around it, a capability that will be widely used in the future. According to the detected position, expression, and gender, the robot can control its own position and select an appropriate form of address and communication method. This visual research provides a personalized human-robot interaction mode for the robot, which helps to build a complex and intelligent human-robot interaction system.
As human-robot interaction for humanoid robots attracts more attention, more and more organizations have begun to work on it. Studies abroad started earlier; an early classic is the infant-like robot Kismet of MIT (Massachusetts Institute of Technology), which is capable of natural and intuitive interaction and whose vision system consists of face detection, motion detection, and skin colour detection. KOBIAN-RII (Trovato et al., 2012), a relatively mature project of Waseda University in Japan, can achieve dynamic emotional expression. Italy's programmable humanoid robot iCub (Parmiggiani et al., 2012) is mainly used to study children's cognitive processes.
Based on the facial robot SHFR-III, this paper develops the robot's vision system, including face detection, face location, gender recognition, and facial expression recognition and reproduction, which is of great significance for the further development of human-robot interaction systems.
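The paper does not give code for these components; the sketch below shows a minimal face-detection front end of the kind listed above, assuming OpenCV's stock Haar cascade (the detector actually used by SHFR-III is not named here, and the camera index is illustrative).

```python
# Minimal face-detection sketch, assuming OpenCV's stock Haar cascade.
import cv2

# Load the frontal-face cascade shipped with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # hypothetical camera index; adjust for the rig

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, w, h) bounding box in pixel coordinates.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The bounding boxes from this stage would feed the later location, gender, and expression steps.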
2 FACIAL ROBOT SHFR-III
The platform is the facial robot SHFR-III. The robot's head is divided into four parts: the eyebrow mechanism, the eye mechanism, the jaw mechanism, and the neck mechanism. It has 22 degrees of freedom, and each degree of freedom is driven by a steering gear (servo). The PC sends commands to the lower machine, an FPGA, via serial communication. After receiving an instruction, the lower machine sends PWM signals to the servos, and the rotation of the servos drives the robot's expression. The robot head and control box are shown in Fig. 1.
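A hedged sketch of this PC-to-FPGA command path follows. The paper does not specify the serial frame format; the header byte, baud rate, port name, and angle-to-pulse mapping below are illustrative assumptions only.

```python
# Sketch of sending one servo target from the PC to the FPGA lower machine.
# Frame layout (assumed): [0xFF header][channel][pulse width, 2 bytes big-endian].
import serial  # pySerial


def send_servo_command(port: serial.Serial, channel: int, angle_deg: float) -> None:
    """Send one servo target; the lower machine converts it to a PWM signal."""
    # Map 0-180 degrees onto a typical 500-2500 us servo pulse range (assumption).
    pulse_us = int(500 + (angle_deg / 180.0) * 2000)
    frame = bytes([0xFF, channel]) + pulse_us.to_bytes(2, "big")
    port.write(frame)


if __name__ == "__main__":
    # Hypothetical port and baud rate; match the actual FPGA UART settings.
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as ser:
        send_servo_command(ser, channel=3, angle_deg=90.0)  # e.g. one jaw servo
```

In this scheme the PC only issues target angles; pulse generation and timing for all 22 channels stay on the FPGA.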
The binocular vision model of the facial robot consists of two parts: hardware and software. The software is composed of the host computer's vision processing algorithm, the target location algorithm