other corpora. However, given the simplicity of the method and the variety of the corpora already tested, we are confident that the ELM/ELM-AE can achieve at least comparable results to the SVM on other databases as well.
REFERENCES
Bengio, Y., Courville, A., and Vincent, P. (2013). Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis & Machine Intelligence, 35(8):1798–1828.
Burkhardt, F., Paeschke, A., Rolfes, M., Sendlmeier, W. F., and Weiss, B. (2005). A Database of German Emotional Speech. In Proc. Interspeech, pages 1517–1520.
Eyben, F., Wöllmer, M., and Schuller, B. (2009). openEAR - introducing the Munich open-source emotion and affect recognition toolkit. In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, pages 1–6.
Huang, G.-B., Zhou, H., Ding, X., and Zhang, R. (2012). Extreme Learning Machine for Regression and Multiclass Classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42(2):513–529.
Johnson, W. B. and Lindenstrauss, J. (1984). Extensions of Lipschitz mappings into a Hilbert space. In Conference in modern analysis and probability, volume 26, pages 189–206.
Martin, O., Kotsia, I., Macq, B., and Pitas, I. (2006). The eNTERFACE'05 Audio-Visual Emotion Database. In 22nd International Conference on Data Engineering Workshops (ICDEW'06), pages 1–8. IEEE.
Schuller, B., Steidl, S., and Batliner, A. (2009a). The INTERSPEECH 2009 Emotion Challenge. In Proc. Interspeech, pages 312–315.
Schuller, B., Steidl, S., Batliner, A., Burkhardt, F., Devillers, L., Müller, C., and Narayanan, S. (2010). The INTERSPEECH 2010 Paralinguistic Challenge. In Proc. Interspeech, pages 2794–2797.
Schuller, B., Steidl, S., Batliner, A., Nöth, E., Vinciarelli, A., Burkhardt, F., van Son, R., Weninger, F., Eyben, F., Bocklet, T., Mohammadi, G., and Weiss, B. (2012). The INTERSPEECH 2012 Speaker Trait Challenge. In Proc. Interspeech, pages 254–257.
Schuller, B., Vlasenko, B., Eyben, F., Rigoll, G., and Wendemuth, A. (2009b). Acoustic emotion recognition: A benchmark comparison of performances. In 2009 IEEE Workshop on Automatic Speech Recognition & Understanding, pages 552–557.
Steidl, S. (2009). Automatic Classification of Emotion-Related User States in Spontaneous Children's Speech. PhD thesis, Technische Fakultät der Universität Erlangen-Nürnberg.
Steininger, S., Rabold, S., Dioubina, O., and Schiel, F. (2002). Development of the user-state conventions for the multimodal corpus in SmartKom. In Proc. of the 3rd Int. Conf. on Language Resources and Evaluation, Workshop on Multimodal Resources and Multimodal Systems Evaluation, pages 33–37.
Stuhlsatz, A., Meyer, C., Eyben, F., Zielke, T., Meier, G., and Schuller, B. (2011). Deep neural networks for acoustic emotion recognition: Raising the benchmarks. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5688–5691.
Uzair, M., Shafait, F., Ghanem, B., and Mian, A. (2016). Representation learning with deep extreme learning machines for efficient image set classification. Neural Computing and Applications, pages 1–13.