Vision System of Facial Robot SHFR-III for Human-robot Interaction

Xianxin Ke, Yujiao Zhu, Yang Yang, Jizhong Xin, Zhitong Luo

2016

Abstract

The improvement of human-robot interaction is an inevitable trend in the development of robots, and vision is an important way for a robot to obtain information from its surroundings. This paper sets up a binocular vision model on the facial expression robot SHFR-III and develops a vision system for human-robot interaction that includes face detection, face location, gender recognition, and facial expression recognition and reproduction. The experimental results show that the vision system performs accurately and stably and that the robot can carry out human-robot interaction.
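The paper itself does not publish code. As a rough, illustrative sketch of the pipeline described in the abstract, the Python snippet below pairs Viola-Jones face detection with Haar-like features (the approach covered by references 4 and 6) with a simple pinhole stereo model for estimating the distance to a detected face from the two camera views. The OpenCV cascade file, the focal length FOCAL_PX, the baseline BASELINE_M, and the input image names are placeholder assumptions, not parameters of SHFR-III.

import cv2

# Placeholder stereo parameters -- assumed values, not taken from the SHFR-III platform.
FOCAL_PX = 800.0     # assumed focal length in pixels
BASELINE_M = 0.06    # assumed distance between the two cameras, in metres

# Stock OpenCV frontal-face Haar cascade (a Viola-Jones style detector).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray):
    # Return (x, y, w, h) boxes for faces found in a grayscale frame.
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def face_depth(x_left, x_right):
    # Rough distance from the horizontal disparity of the same face in both views.
    disparity = float(x_left - x_right)
    if disparity <= 0:
        return None  # no usable disparity
    return FOCAL_PX * BASELINE_M / disparity

# Hypothetical input frames from the left and right cameras.
left = cv2.cvtColor(cv2.imread("left.png"), cv2.COLOR_BGR2GRAY)
right = cv2.cvtColor(cv2.imread("right.png"), cv2.COLOR_BGR2GRAY)
faces_left, faces_right = detect_faces(left), detect_faces(right)
if len(faces_left) and len(faces_right):
    (xl, _, _, _), (xr, _, _, _) = faces_left[0], faces_right[0]
    depth = face_depth(xl, xr)
    if depth is not None:
        print("approximate distance to face: %.2f m" % depth)

In practice the stereo pair would be calibrated and rectified before the disparity is trusted; the sketch only illustrates the triangulation idea behind binocular face location.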

References

  1. Delaunay F., de Greeff J., Belpaeme T., 2009, Towards retro-projected robot faces: an alternative to mechatronic and android faces, The 18th IEEE International Symposium on Robot and Human Interactive Communication, Toyama, pp. 306-311.
  2. Trovato G., Kishi T., Endo N., et al., 2012, Development of facial expressions generator for emotion expressive humanoid robot, 12th IEEE-RAS International Conference on Humanoid Robots, pp. 303-308.
  3. Parmiggiani A., Metta G., Tsagarakis N., 2012, The mechatronic design of the new legs of the iCub robot, IEEE-RAS International Conference on Humanoid Robots, Japan, pp. 481-486.
  4. Peleshko D., Soroka K., 2013, Research of usage of Haar-like features and AdaBoost algorithm in Viola-Jones method of object detection, International Conference on the Experience of Designing and Application of CAD Systems in Microelectronics (CADSM), pp. 284-286.
  5. Li Q., Yang Q., Wu S., 2014, Multi-bit sensing based target localization (MSTL) algorithm in wireless sensor networks, 23rd International Conference on Computer Communication and Networks (ICCCN), Shanghai, pp. 1-7.
  6. Lienhart R., Maydt J., 2002, An extended set of Haar-like features for rapid object detection, 2002 International Conference on Image Processing, pp. 900-903.
  7. Ekman P., Rosenberg E. L., 2005, What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS), Second Edition, Oxford University Press.
  8. Lucey P., Cohn J. F., Kanade T., et al., 2010, The extended Cohn-Kanade dataset (CK+): A complete facial expression dataset for action unit and emotion-specified expression, Proceedings of the Third International Workshop on CVPR for Human Communicative Behavior Analysis, pp. 94-101.
  9. Phillips P. J., Wechsler H., Huang J., et al., 1998, The FERET database and evaluation procedure for face-recognition algorithms, Image and Vision Computing, pp. 295-306.


Paper Citation


in Harvard Style

Ke X., Zhu Y., Yang Y., Xin J. and Luo Z. (2016). Vision System of Facial Robot SHFR-III for Human-robot Interaction. In Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO, ISBN 978-989-758-198-4, pages 472-478. DOI: 10.5220/0005994804720478


in Bibtex Style

@conference{icinco16,
author={Xianxin Ke and Yujiao Zhu and Yang Yang and Jizhong Xin and Zhitong Luo},
title={Vision System of Facial Robot SHFR-III for Human-robot Interaction},
booktitle={Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO},
year={2016},
pages={472-478},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005994804720478},
isbn={978-989-758-198-4},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 13th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO
TI - Vision System of Facial Robot SHFR-III for Human-robot Interaction
SN - 978-989-758-198-4
AU - Ke X.
AU - Zhu Y.
AU - Yang Y.
AU - Xin J.
AU - Luo Z.
PY - 2016
SP - 472
EP - 478
DO - 10.5220/0005994804720478