on Multimodal Interaction, ICMI ’15, pages 459–466, New York, NY, USA. ACM.
Kim, B.-K., Lee, H., Roh, J., and Lee, S.-Y. (2015). Hierarchical committee of deep CNNs with exponentially-weighted decision fusion for static facial expression recognition. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ICMI ’15, pages 427–434, New York, NY, USA. ACM.
Kotsia, I. and Pitas, I. (2007). Facial expression recognition in image sequences using geometric deformation features and support vector machines. IEEE Transactions on Image Processing, 16(1):172–187.
Lee, S. H., Plataniotis, K. N., and Ro, Y. M. (2014). Intra-class variation reduction using training expression images for sparse representation based facial expression recognition. IEEE Transactions on Affective Computing, 5(3):340–351.
Lei, G., Li, X.-h., Zhou, J.-l., and Gong, X.-g. (2009). Geometric feature based facial expression recognition using multiclass support vector machines. In 2009 IEEE International Conference on Granular Computing, GRC ’09, pages 318–321. IEEE.
Liu, M., Li, S., Shan, S., and Chen, X. (2013). AU-aware deep networks for facial expression recognition. In 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pages 1–6.
Liu, M., Wang, R., Li, S., Shan, S., Huang, Z., and Chen, X. (2014a). Combining multiple kernel methods on Riemannian manifold for emotion recognition in the wild. In Proceedings of the 16th International Conference on Multimodal Interaction, ICMI ’14, pages 494–501, New York, NY, USA. ACM.
Liu, P., Han, S., Meng, Z., and Tong, Y. (2014b). Facial expression recognition via a boosted deep belief network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1805–1812.
Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010). The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, pages 94–101.
Lyons, M., Akamatsu, S., Kamachi, M., and Gyoba, J. (1998). Coding facial expressions with Gabor wavelets. In Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, pages 200–205.
Ng, H.-W., Nguyen, V. D., Vonikakis, V., and Winkler, S. (2015). Deep learning for emotion recognition on small datasets using transfer learning. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ICMI ’15, pages 443–449, New York, NY, USA. ACM.
Perveen, N., Gupta, S., and Verma, K. (2012). Facial expression recognition using facial characteristic points and Gini index. In 2012 Students Conference on Engineering and Systems, pages 1–6.
Perveen, N., Roy, D., and Mohan, C. K. (2018). Spontaneous expression recognition using universal attribute model. IEEE Transactions on Image Processing, 27(11):5575–5584.
Perveen, N., Singh, D., and Mohan, C. K. (2016). Spontaneous facial expression recognition: A part based approach. In 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 819–824.
Petridis, S., Martinez, B., and Pantic, M. (2013). The MAHNOB laughter database. Image and Vision Computing, 31(2):186–202. Affect Analysis In Continuous Input.
Snell, R. (2008). Clinical Anatomy by Regions. Lippincott Williams & Wilkins.
Sun, B., Li, L., Zuo, T., Chen, Y., Zhou, G., and Wu, X. (2014). Combining multimodal features with hierarchical classifier fusion for emotion recognition in the wild. In Proceedings of the 16th International Conference on Multimodal Interaction, ICMI ’14, pages 481–486, New York, NY, USA. ACM.
Yao, A., Shao, J., Ma, N., and Chen, Y. (2015). Capturing AU-aware facial features and their latent relations for emotion recognition in the wild. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, ICMI ’15, pages 451–458, New York, NY, USA. ACM.
Yu, X., Zhang, S., Yan, Z., Yang, F., Huang, J., Dunbar, N. E., Jensen, M. L., Burgoon, J. K., and Metaxas, D. N. (2015). Is interactional dissynchrony a clue to deception? Insights from automated analysis of nonverbal visual cues. IEEE Transactions on Cybernetics, 45(3):492–506.
Zhan, C., Li, W., Ogunbona, P., and Safaei, F. (2008). A real-time facial expression recognition system for online games. Int. J. Comput. Games Technol., 2008:10:1–10:7.
Zhao, K., Chu, W.-S., De la Torre, F., Cohn, J. F., and Zhang, H. (2016). Joint patch and multi-label learning for facial action unit and holistic expression recognition. IEEE Transactions on Image Processing, 25(8):3931–3946.
VISAPP 2020 - 15th International Conference on Computer Vision Theory and Applications