An Approach for Sentiment Classification of Music

Francesco Colace, Luca Casaburi

2016

Abstract

In recent years, music recommendation systems and the dynamic generation of playlists have become extremely promising research areas. Thanks to the widespread use of the Internet, users can store substantial collections of music and enjoy them in everyday contexts through portable music players. The challenge for modern music recommendation systems is how to process this large amount of data and extract meaningful content descriptors. The aim of this paper is to compare different approaches to inferring the mood conveyed by a song and to propose a new set of features to be considered for classification.
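
As an illustration of the kind of feature-based pipeline the abstract alludes to, the sketch below extracts a few common audio descriptors (tempo, MFCC means, spectral centroid) with librosa and trains an SVM mood classifier with scikit-learn. This is a minimal, hypothetical example, not the feature set or pipeline proposed in the paper; the file names and mood labels are placeholders.

# Minimal, hypothetical sketch (not the authors' method): basic audio
# descriptors plus an SVM classifier, assuming librosa and scikit-learn.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def describe_track(path):
    """Return a small descriptor vector: tempo, 13 MFCC means, spectral centroid mean."""
    signal, sr = librosa.load(path, mono=True)
    tempo, _ = librosa.beat.beat_track(y=signal, sr=sr)
    mfcc_means = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).mean(axis=1)
    centroid_mean = librosa.feature.spectral_centroid(y=signal, sr=sr).mean()
    return np.hstack([tempo, mfcc_means, centroid_mean])

# Placeholder corpus of (audio file, mood label) pairs.
corpus = [("happy_song.wav", "happy"), ("sad_song.wav", "sad")]
features = np.vstack([describe_track(path) for path, _ in corpus])
labels = [mood for _, mood in corpus]

# Scale the descriptors and fit an RBF-kernel SVM.
classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
classifier.fit(features, labels)
print(classifier.predict([describe_track("new_song.wav")]))

In such a pipeline the classifier stage is largely interchangeable; the contribution discussed in the paper concerns which content descriptors to compute in the first place.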



Paper Citation


in Harvard Style

Colace F. and Casaburi L. (2016). An Approach for Sentiment Classification of Music. In Proceedings of the 18th International Conference on Enterprise Information Systems - Volume 1: ICEIS, ISBN 978-989-758-187-8, pages 421-426. DOI: 10.5220/0005826504210426


in BibTeX Style

@conference{iceis16,
author={Francesco Colace and Luca Casaburi},
title={An Approach for Sentiment Classification of Music},
booktitle={Proceedings of the 18th International Conference on Enterprise Information Systems - Volume 1: ICEIS},
year={2016},
pages={421-426},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005826504210426},
isbn={978-989-758-187-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 18th International Conference on Enterprise Information Systems - Volume 1: ICEIS
TI - An Approach for Sentiment Classification of Music
SN - 978-989-758-187-8
AU - Colace F.
AU - Casaburi L.
PY - 2016
SP - 421
EP - 426
DO - 10.5220/0005826504210426
ER -