facial expression. 5th UKSim European Symposium
on Computer Modeling and Simulation (EMS), pages
196–201.
Anderson, K. and McOwan, P. W. (2006). A real-time automated system for the recognition of human facial expressions. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 36(1):96–105.
Atrey, P. K., Anwar Hossain, M., El-Saddik, A., and Kankanhalli, M. S. (2010). Multimodal fusion for multimedia analysis: a survey. Multimedia Systems, pages 345–379.
Bartlett, M., Littlewort, G., Frank, M., Lainscsek, C., Fasel,
I., and Movellan, J. (2006). Automatic recognition of
facial actions in spontaneous expressions. Journal of
Multimedia, pages 22–35.
Bartlett, M. S., Littlewort, G., Fasel, I., and Movellan, J. R. (2003). Real time face detection and facial expression recognition: Development and applications to human computer interaction. Computer Vision and Pattern Recognition Workshop.
Bouguet, J. (2000). Pyramidal implementation of the Lucas-Kanade feature tracker. Intel Corporation, Microprocessor Research Labs.
Bradski, G., Darrell, T., Essa, I., Malik, J., Perona, P., Sclaroff, S., and Tomasi, C. (2006). http://sourceforge.net/projects/opencvlibrary/.
Busso, C., Deng, Z., Yildirim, S., Bulut, M., Lee, C.-M., Kazemzadeh, A., Lee, S., Neumann, U., and Narayanan, S. (2004). Analysis of emotion recognition using facial expressions, speech and multimodal information. 6th International Conference on Multimodal Interfaces, pages 205–211.
Chen, J., Chen, D., Gong, Y., Yu, M., Zhang, K., and Wang,
L. (2012). Facial expression recognition using geo-
metric and appearance features. Proceedings of the
4th International Conference on Internet Multimedia
Computing and Service, pages 29–33.
Fasel, I., Bartlett, M., and Movellan, J. (2002). A comparison of Gabor filter methods for automatic detection of facial landmarks. 5th International Conference on Automatic Face and Gesture Recognition, pages 345–350.
Gunes, H. and Piccardi, M. (2005). Affect recognition from
face and body: Early fusion vs. late fusion. IEEE In-
ternational Conference on Systems, Man and Cyber-
netics, 4:3437–3443.
Kotsia, I., Buciu, I., and Pitas, I. (2008a). An analysis of fa-
cial expression recognition under partial facial image
occlusion. Image and Vision Computing, 26(7):1052–
1067.
Kotsia, I. and Pitas, I. (2007). Facial expression recognition
in image sequences using geometric deformation fea-
tures and support vector machines. IEEE Transactions
on Image Processing, 16:172–187.
Kotsia, I., Zafeiriou, S., and Pitas, I. (2008b). Texture
and shape information fusion for facial expression and
facial action unit recognition. Pattern Recognition,
pages 833–851.
Kuncheva, L. I. (2002). A theoretical study on six classi-
fier fusion strategies. IEEE Transactions on Pattern
Analysis and Machine Intelligence, pages 281–286.
Lee, C.-J. and Wang, S.-D. (1999). Fingerprint feature ex-
traction using Gabor filters. Electronics Letters, pages
288–290.
Lucey, P., Cohn, J., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. (2010). The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. IEEE Computer Vision and Pattern Recognition Workshops, pages 94–101.
Mironica, I., Ionescu, B., Knees, P., and Lambert, P. (2013). An in-depth evaluation of multimodal video genre categorization. 11th International Workshop on Content-Based Multimedia Indexing, pages 11–16.
Movellan, J. (2005). Tutorial on Gabor filters. MPLab Tutorials, UCSD MPLab, Tech. Rep.
Niaz, U. and Merialdo, B. (2013). Fusion methods for multi-modal indexing of web data. 14th International Workshop on Image Analysis for Multimedia Interactive Services, pages 1–4.
Shan, C., Gong, S., and McOwan, P. W. (2009). Facial expression recognition based on Local Binary Patterns: A comprehensive study. Image and Vision Computing, 27:803–816.
Shi, J. and Tomasi, C. (1994). Good features to track. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 593–600.
Snelick, R., Uludag, U., Mink, A., Indovina, M., and Jain, A. (2005). Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27:450–455.
Snoek, C. G. M., Worring, M., and Smeulders, A. W. M. (2005). Early versus late fusion in semantic video analysis. ACM Multimedia, pages 399–402.
Vinay, K. and Shreyas, B. (2006). Face recognition using Gabor wavelets. 40th Asilomar Conference on Signals, Systems and Computers, pages 593–597.
Viola, P. and Jones, M. (2001). Robust real-time object detection. International Journal of Computer Vision.
Vukadinovic, D. and Pantic, M. (2005). Fully automatic facial feature point detection using Gabor feature based boosted classifiers. IEEE International Conference on Systems, Man and Cybernetics, pages 1692–1698.
Wallhoff, F. (2006). Facial expressions and emotion database, http://www.mmk.ei.tum.de/~waf/fgnet/feedtum.html.
Wan, S. and Aggarwal, J. (2013). A scalable metric
learning-based voting method for expression recogni-
tion. 10th IEEE International Conference and Work-
shops on Automatic Face and Gesture Recognition
(FG), pages 1–8.
Zeng, Z., Pantic, M., Roisman, G. I., and Huang, T. S. (2009). A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 39–58.
Zhang, L., Tjondronegoro, D., and Chandran, V. (2012).
Discovering the best feature extraction and selection
algorithms for spontaneous facial expression recogni-
tion. IEEE International Conference on Multimedia
and Expo, pages 1027–1032.