(CVPR), pages 3382–3389, Washington, DC. IEEE
Computer Society.
Bremner, P., Celiktutan, O., and Gunes, H. (2016). Personality perception of robot avatar tele-operators. In Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 141–148, Christchurch, New Zealand.
Chen, Y. M., Huang, F. C., Guan, S. H., and Chen, B. Y.
(2012). Animating lip-sync characters with dominated
animeme models. IEEE Transactions on Circuits and
Systems for Video Technology, 22(9):1344–1353.
Cootes, T. F., Edwards, G. J., and Taylor, C. J. (2001). Ac-
tive appearance models. IEEE Transactions on Pat-
tern Analysis and Machine Intelligence, 23(6):681–
685.
CrazyTalk (2018). Create 3D talking heads with CrazyTalk. Retrieved from https://www.reallusion.com/crazytalk/
Faceshift (2018). Faceshift. Retrieved from http://openni.ru/solutions/faceshift/index.html
Huang, F.-C., Chen, Y.-M., Wang, T.-H., Chen, B.-Y., and
Guan, S.-H. (2009). Animating lip-sync speech faces
by dominated animeme models. In SIGGRAPH ’09:
Posters, pages 2:1–2:1, New York, NY. ACM.
Krishnan, S. T. and Gonzalez, J. U. (2015). Building Your Next Big Thing with Google Cloud Platform: A Guide for Developers and Enterprise Architects. Apress, Berkeley, CA, USA, 1st edition.
MHX2 (2017). MHX2 documentation. Retrieved from https://thomasmakehuman.wordpress.com/mhx2-documentation
Mullen, T. (2012). Mastering Blender. SYBEX Inc.,
Alameda, CA, USA, 2nd edition.
Quicktalk (2017). Quicktalk lip synch addon. Retrieved from https://tentacles.org.uk/quicktalk
Russell, J. and Cohn, R. (2012). Makehuman. Book on Demand. Retrieved from https://books.google.ca/books?id=TFeaMQEACAAJ
Shaked, N. A. (2017). Avatars and virtual agents – relationship interfaces for the elderly. Healthcare Technology Letters, 4(3):83–87.
SpeechRecognition (2017). SpeechRecognition 3.8.1: Python Package Index (PyPI). Retrieved from https://pypi.python.org/pypi/SpeechRecognition/ (Accessed 30 April 2017)
VirtualGL (2018). The VirtualGL Project. Retrieved from https://www.virtualgl.org/
Wan, V., Anderson, R., Blokland, A., Braunschweiler, N.,
Chen, L., Kolluru, B., Latorre, J., Maia, R., Stenger,
B., Yanagisawa, K., Stylianou, Y., Akamine, M.,
Gales, M., and Cipolla, R. (2013). Photo-realistic ex-
pressive text to talking head synthesis. In Proceedings
of the Annual Conference of the International Speech
Communication Association (INTERSPEECH), Lyon,
France.
Wang, L., Han, W., and Soong, F. K. (2012). High quality lip-sync animation for 3D photo-realistic talking head. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4529–4532, Kyoto, Japan.
Zen, H., Nose, T., Yamagishi, J., Sako, S., Masuko, T.,
Black, A. W., and Tokuda, K. (2007). The HMM-
based speech synthesis system (HTS) version 2.0. In
Proceedings of the 7th ISCA Tutorial and Research
Workshop on Speech Synthesis (SSW), Kyoto, Japan.
Zoric, G. and Pandzic, I. S. (2005). A real-time lip sync system using a genetic algorithm for automatic neural network configuration. In Proceedings of the IEEE International Conference on Multimedia and Expo, pages 1366–1369, Amsterdam, The Netherlands.
Leveraging Cloud-based Tools to Talk with Robots