The experiments show, in line with the state of the art, that the error on Valence is much higher than the error on Arousal. Recalling that the annotations used to train the models are provided by human listeners, it can be inferred from the experiments that people are better at judging the level of activation of the music (arousal) than the negative or positive mood conveyed by a song (valence). The best result was obtained with Experiment 4, which achieves slightly better outcomes than the state of the art presented in the MediaEval 2013 task.
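To make the per-dimension comparison concrete, the following minimal sketch (not the actual evaluation code used in the experiments) computes the root-mean-square error separately for the two dimensions; the annotation and prediction arrays are hypothetical stand-ins for per-track values on a common scale.

    import numpy as np

    def rmse(y_true, y_pred):
        """Root-mean-square error between annotations and predictions."""
        return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

    # Hypothetical ground-truth annotations and model predictions in [-1, 1].
    valence_true = np.array([0.2, -0.5, 0.7, -0.1])
    valence_pred = np.array([0.6, 0.1, 0.3, -0.5])
    arousal_true = np.array([0.4, -0.3, 0.8, -0.6])
    arousal_pred = np.array([0.5, -0.2, 0.7, -0.5])

    print(f"Valence RMSE: {rmse(valence_true, valence_pred):.3f}")
    print(f"Arousal RMSE: {rmse(arousal_true, arousal_pred):.3f}")

With these toy values the valence error comes out larger than the arousal error, mirroring the pattern observed in the experiments.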
A criticism that can be made of this dataset concerns the distribution of its records: when they are plotted on Russell's valence-arousal plane, the audio tracks with high valence and low arousal (RELAXED) turn out to be few. In future work, a more balanced set will be adopted.
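The imbalance can be checked by counting tracks per quadrant of the plane. The sketch below is illustrative only: the quadrant labels follow Russell's model, and the annotation values are hypothetical stand-ins for the dataset's per-track mean valence and arousal, centred at zero.

    from collections import Counter

    def russell_quadrant(valence, arousal):
        """Map a (valence, arousal) pair, centred at zero, to a Russell quadrant."""
        if valence >= 0:
            return "HAPPY" if arousal >= 0 else "RELAXED"
        return "ANGRY" if arousal >= 0 else "SAD"

    # Hypothetical per-track annotations; real values would come from the dataset.
    annotations = [(0.5, 0.6), (0.4, 0.7), (-0.3, 0.5), (-0.6, -0.4), (0.7, -0.2)]
    print(Counter(russell_quadrant(v, a) for v, a in annotations))
    # A low RELAXED count is the imbalance observed above.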
REFERENCES 
Juslin, P. N. & Laukka, P., 2004, Expression, Perception, 
and Induction of Musical Emotions: A Review and a 
Questionnaire Study of Everyday Listening, Journal of 
New Music Research, 33 (3), 217–238. 
Krumhansl, C. L., 1997, An exploratory study of musical emotions and psychophysiology. Canadian Journal of Experimental Psychology, 51 (4), 336–353.
Dalla Bella, S., Peretz, I., Rousseau, L., & Gosselin, N., 2001, A developmental study of the affective value of tempo and mode in music, Cognition, 80 (3), 1–10.
Gosselin, N., Peretz, I., Noulhiane, M., Hasboun, D., Beckett, C., Baulac, M., & Samson, S., 2005, Impaired recognition of scary music following unilateral temporal lobe excision, Brain, 128 (3), 628–640.
Laurier, C. & Herrera, P., 2007, Audio music mood 
classification using support vector machine. In 
Proceedings of the 8th International Conference on 
Music Information Retrieval. Vienna, Austria. 
Lu, L., Liu, D., & Zhang, H.-J., 2007, Automatic mood detection and tracking of music audio signals. IEEE Transactions on Audio, Speech, and Language Processing, 14 (1), 5–18.
Shi, Y.-Y., Zhu, X., Kim, H.-G., & Eom, K.-W., 2006, A 
Tempo Feature via Modulation Spectrum Analysis and 
its Application to Music Emotion Classification. In 
Proceedings of the IEEE International Conference on 
Multimedia and Expo, pp. 1085–1088.  
Wieczorkowska, A., Synak, P., Lewis, R., & Raś, Z. W., 2005, Extracting Emotions from Music Data. In M.-S. Hacid, N. V. Murray, Z. W. Raś, & S. Tsumoto (Eds.) Foundations of Intelligent Systems, Lecture Notes in Computer Science, vol. 3488, chap. 47, pp. 456–465. Berlin, Heidelberg: Springer-Verlag.
Li, T. & Ogihara, M., 2003, Detecting emotion in music. In Proceedings of the International Symposium on Music Information Retrieval, Washington, D.C., USA.
Farnsworth, P. R., 1954, A study of the Hevner adjective 
list. The Journal of Aesthetics and Art Criticism, 13 (1), 
97–103. 
Skowronek, J., McKinney, M., & van de Par, S., 2007, A 
Demonstrator for Automatic Music Mood Estimation. 
In Proceedings of the 8th International Conference on 
Music Information Retrieval, pp. 345–346. Vienna, 
Austria. 
Thayer, R. E., 1989, The biopsychology of mood and arousal. Oxford: Oxford University Press.
Thayer, R. E., 1996, The Origin of Everyday Moods: Managing Energy, Tension, and Stress. Oxford: Oxford University Press.
Yang, Y. H., Lin, Y. C., Su, Y. F., & Chen, H. H., 2008, A 
Regression Approach to Music Emotion Recognition. 
IEEE Transactions on Audio, Speech, and Language 
Processing, 16 (2), 448–457. 
Yang, Y. H. & Chen, H., 2010, Ranking-Based Emotion Recognition for Music Organization and Retrieval. IEEE Transactions on Audio, Speech, and Language Processing, 487–497.
Eerola, T., Lartillot, O., & Toiviainen, P., 2009, Prediction of Multidimensional Emotional Ratings in Music from Audio using Multivariate Regression Models. In Proceedings of ISMIR 2009, pp. 621–626.
Soleymani, M., Caro, M. N., Schmidt, E. M., Sha, C.-Y., & Yang, Y.-H., 2013, 1000 songs for emotional analysis of music. In Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia (CrowdMM '13), New York, NY, USA: ACM, pp. 1–6.
Cardoso, L., Panda, R., & Paiva, R. P., 2011, MOODetector: A Prototype Software Tool for Mood-based Playlist Generation. Department of Informatics Engineering, University of Coimbra, Pólo II, Coimbra, Portugal.
Aljanaki, A., Wiering, F., & Veltkamp, R. C., 2013, MIRUtrecht participation in MediaEval 2013: Emotion in Music task. Utrecht University, Utrecht, The Netherlands.