and speech signal. In SICE, volume 3, pages 2890–
2895. IEEE.
Goldman, A. I. and Sripada, C. S. (2005). Simulationist
models of face-based emotion recognition. Cognition,
94(3):193–213.
Han, K., Yu, D., and Tashev, I. (2014). Speech emotion
recognition using deep neural network and extreme
learning machine. In Fifteenth annual conference of
the international speech communication association.
Hansen, J. H. and Bou-Ghazale, S. E. (1997). Getting
started with SUSAS: A speech under simulated and
actual stress database. In Fifth European Conference
on Speech Communication and Technology.
Haxby, J. V., Hoffman, E. A., and Gobbini, M. I. (2002).
Human neural systems for face recognition and social
communication. Biological psychiatry, 51(1):59–67.
Huang, G.-B., Zhu, Q.-Y., and Siew, C.-K. (2006). Extreme
learning machine: Theory and applications. Neuro-
computing, 70(1-3):489–501.
Ioannou, S. V. et al. (2005). Emotion recognition through
facial expression analysis based on a neurofuzzy net-
work. Neural Networks, 18(4):423–435.
Jackson, P. and Haq, S. (2014). Surrey audio-visual ex-
pressed emotion SAVEE database. University of Sur-
rey: Guildford, UK.
Jerritta, S., Murugappan, M., Nagarajan, R., and Wan, K.
(2011). Physiological signals based human emotion
recognition: A review. In International colloquium
on signal processing and its applications, pages 410–
415. IEEE.
Jin, Q., Li, C., Chen, S., and Wu, H. (2015). Speech emo-
tion recognition with acoustic and lexical features. In
ICASSP, pages 4749–4753. IEEE.
Kahou, S. E. et al. (2013). Combining modality spe-
cific deep neural networks for emotion recognition in
video. In ICMI, pages 543–550. ACM.
Kim, J. and André, E. (2008). Emotion recognition based
on physiological changes in music listening. TPAMI,
30(12):2067–2083.
Kim, K. H., Bang, S. W., and Kim, S. R. (2004). Emo-
tion recognition system using short-term monitoring
of physiological signals. Medical and biological en-
gineering and computing, 42(3):419–427.
Kishore, K. K. and Satish, P. K. (2013). Emotion recogni-
tion in speech using MFCC and wavelet features. In
IACC, pages 842–847. IEEE.
Kobayashi, H. and Hara, F. (1992). Recognition of six basic
facial expression and their strength by neural network.
In International workshop on robot and human com-
munication, pages 381–386. IEEE.
Kohler, C. G. et al. (2000). Emotion recognition deficit in
schizophrenia: Association with symptomatology and
cognition. Biological psychiatry, 48(2):127–136.
Kontopoulos, S. and Drakopoulos, G. (2014). A space ef-
ficient scheme for persistent graph representation. In
ICTAI, pages 299–303. IEEE.
Kwon, O.-W., Chan, K., Hao, J., and Lee, T.-W. (2003).
Emotion recognition by speech signals. In Eighth
European conference on speech communication and
technology.
Lane, R. D. et al. (1996). Impaired verbal and nonverbal
emotion recognition in alexithymia. Psychosomatic
medicine, 58(3):203–210.
Li, L., Zhao, Y., Jiang, D., Zhang, Y., Wang, F., Gonzalez,
I., Valentin, E., and Sahli, H. (2013a). Hybrid deep
neural network–Hidden Markov model (DNN-HMM)
based speech emotion recognition. In Humaine Asso-
ciation Conference on Affective Computing and Intel-
ligent Interaction, pages 312–317. IEEE.
Li, S., Yi, D., Lei, Z., and Liao, S. (2013b). The CASIA
NIR-VIS 2.0 face database. In CVPR, pages 348–353.
Li, T. and Ogihara, M. (2004). Content-based music simi-
larity search and emotion detection. In ICASSP, vol-
ume 5, pages V–705. IEEE.
Li, X. et al. (2007). Stress and emotion classification using
jitter and shimmer features. In ICASSP, volume 4,
pages IV–1081. IEEE.
Lin, Y.-L. and Wei, G. (2005). Speech emotion recognition
based on HMM and SVM. In International conference
on machine learning and cybernetics, volume 8, pages
4898–4901. IEEE.
Lin, Y.-P. et al. (2010). EEG-based emotion recognition
in music listening. Transactions on biomedical engi-
neering, 57(7):1798–1806.
Livingstone, S. R., Peck, K., and Russo, F. A. (2012).
RAVDESS: The Ryerson audio-visual database of
emotional speech and song. In Annual meeting of the
Canadian society for brain, behaviour, and cognitive
science, pages 205–211.
Martin, O., Kotsia, I., Macq, B., and Pitas, I. (2006). The
eNTERFACE’05 audio-visual emotion database. In
ICDE, pages 8–8. IEEE.
Mathe, E. and Spyrou, E. (2016). Connecting a con-
sumer brain-computer interface to an internet-of-
things ecosystem. In PETRA, pages 90–95. ACM.
Mohammadi, Z., Frounchi, J., and Amiri, M. (2017).
Wavelet-based emotion recognition system using
EEG signal. Neural Computing and Applications,
28(8):1985–1990.
Murugappan, M., Ramachandran, N., and Sazali, Y. (2010).
Classification of human emotion from EEG using dis-
crete wavelet transform. Journal of biomedical sci-
ence and engineering, 3(4):390.
Nicholson, J., Takahashi, K., and Nakatsu, R. (2000).
Emotion recognition in speech using neural networks.
Neural computing and applications, 9(4):290–296.
Nwe, T. L., Foo, S. W., and De Silva, L. C. (2003). Speech
emotion recognition using hidden Markov models.
Speech Communication, 41(4):603–623.
Pan, Y., Shen, P., and Shen, L. (2012). Speech emotion
recognition using support vector machine. Interna-
tional Journal of Smart Home, 6(2):101–108.
Petrushin, V. A. (2000). Emotion recognition in speech sig-
nal: Experimental study, development, and applica-
tion. In Sixth international conference on spoken lan-
guage processing.
Picard, R. W. (2003). Affective computing: Challenges.
International Journal of Human-Computer Studies,
59(1-2):55–64.
WEBIST 2019 - 15th International Conference on Web Information Systems and Technologies