that the sentence uses antithesis and some of the
negative words are normally associated with Q2 (e.g.,
blood, pain).
Another example, which helps to explain the number of sentences from Q2 erroneously classified as Q3 (and, consequently, the low precision for Q3), is the sentence “Shut up when I’m talking to you, shut up, shut up, shut up, shut up when I’m talking to you, shut up, shut up, shut up, I’m about to break”. This sentence is dominated by the word shut, and our system has the limitation of not recognizing phrasal verbs (e.g., shut up, which is more associated with Q2), whereas the verb shut on its own is associated with Q3, according to DAL. We will address this issue in future work.
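To make this limitation concrete, the following sketch contrasts a unigram-only lexicon lookup with one that matches phrasal verbs (bigrams) before single words. This is a minimal illustration, not our actual system: the (valence, arousal) values are invented placeholders chosen only to reproduce the failure mode, not real DAL ratings.

```python
from statistics import mean

# Hypothetical (valence, arousal) scores in [-1, 1]; NOT real DAL entries.
UNIGRAMS = {"shut": (-0.4, -0.3), "break": (-0.5, 0.2)}   # "shut" alone leans Q3
BIGRAMS = {("shut", "up"): (-0.6, 0.7)}                   # "shut up" leans Q2

def score(tokens, use_bigrams):
    """Average the (valence, arousal) pairs of the matched lexicon entries."""
    hits, i = [], 0
    while i < len(tokens):
        pair = tuple(tokens[i:i + 2])
        if use_bigrams and pair in BIGRAMS:
            hits.append(BIGRAMS[pair])
            i += 2                      # consume the whole phrasal verb
        else:
            if tokens[i] in UNIGRAMS:
                hits.append(UNIGRAMS[tokens[i]])
            i += 1
    if not hits:
        return None
    return mean(v for v, _ in hits), mean(a for _, a in hits)

def quadrant(valence, arousal):
    """Map a (valence, arousal) point to a quadrant of Russell's plane."""
    if valence >= 0:
        return "Q1" if arousal >= 0 else "Q4"
    return "Q2" if arousal >= 0 else "Q3"

tokens = "shut up shut up shut up i am about to break".split()
print(quadrant(*score(tokens, use_bigrams=False)))  # Q3: only "shut" matches
print(quadrant(*score(tokens, use_bigrams=True)))   # Q2: "shut up" matches first
```

With unigrams only, the repeated shut pulls the sentence into Q3; matching shut up before individual words flips the classification to Q2, in line with the annotation.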
We cannot directly compare our results to those of other works, because the datasets are different: to the best of our knowledge, ours is the only one composed of sentences from lyrics (the others are composed of other types of text, such as children's stories, or of less subjective text, such as journalistic text). Nevertheless, the results seem promising in comparison with approaches that apply machine learning to complete song lyrics, e.g., the 73.6% F-measure attained in previous work by our team (Malheiro et al., 2016).
5 CONCLUSIONS
This research addresses the role of the lyrics in the context of music emotion variation detection. To accomplish this task, we created a system that detects the predominant emotion expressed by each sentence (verse) of the lyrics, using a keyword-based approach (KBA): it receives a sentence (verse) and classifies it into the appropriate quadrant, according to Russell's emotion model. To validate our system, we created a training set containing 129 verses and a validation set containing 239, annotated manually with an average of 7 annotations per sentence. We attained an F-measure of 67.4%.
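For clarity, the sketch below shows how the reported F-measure can be computed per quadrant and averaged; whether the averaging is macro (over quadrants, as here) or micro (over all verses) is an assumption of the sketch, not a statement about our evaluation script.

```python
def macro_f1(gold, pred, labels=("Q1", "Q2", "Q3", "Q4")):
    """Per-label F1 = 2PR / (P + R), macro-averaged over the four quadrants."""
    scores = []
    for q in labels:
        tp = sum(g == q and p == q for g, p in zip(gold, pred))
        fp = sum(g != q and p == q for g, p in zip(gold, pred))
        fn = sum(g == q and p != q for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

# Toy example: one Q2 verse misclassified as Q3, as in the error analysis above.
gold = ["Q1", "Q2", "Q2", "Q3", "Q4"]
pred = ["Q1", "Q3", "Q2", "Q3", "Q4"]
print(f"{macro_f1(gold, pred):.3f}")  # 0.833
```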
The main contributions of our work are the proposed KBA methodology, as well as the ground truth of annotated sentences we created. In the future, we intend to improve our methodology, namely by refining the ED dictionary and by adding a mechanism to detect beforehand whether a sentence is emotional or non-emotional. Moreover, we intend to study emotion variation detection along the lyric, in order to understand the importance of its different structures (e.g., the chorus). Additionally, we intend to perform music emotion variation detection in a bimodal scenario combining audio and lyrics, which implies an audio-lyrics alignment.
ACKNOWLEDGEMENTS
This work was supported by CISUC (Center for
Informatics and Systems of the University of
Coimbra).
REFERENCES
Agrawal, A., An, A. 2012. Unsupervised Emotion Detection from Text using Semantic and Syntactic Relations. In Proceedings of the 2012 IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology, pp. 346-353.
Aman, S., Szpakowicz, S. 2007. Identifying Expressions of Emotion in Text. In Proceedings of the 10th International Conference on Text, Speech and Dialogue (TSD 2007), Plzen, Czech Republic, Lecture Notes in Computer Science 4629, Springer, pp. 196-205.
Besson, M., Faita, F., Peretz, I., Bonnel, A., Requin, J. 1998. Singing in the brain: Independence of lyrics and tunes. Psychological Science, 9.
Binali, H., Wu, C., Potdar, V. 2010. Computational Approaches for Emotion Detection in Text. In 4th IEEE International Conference on Digital Ecosystems and Technologies.
Bradley, M., Lang, P. 1999. Affective Norms for English Words (ANEW): Stimuli, Instruction Manual and Affective Ratings. Technical Report C-1, The Center for Research in Psychophysiology, University of Florida.
Chopade, C. 2015. Text based Emotion Recognition. International Journal of Science and Research (IJSR), 4(6), pp. 409-414.
Chunling, M., Prendinger, H., Ishizuka, M. 2005. Emotion Estimation and Reasoning Based on Affective Textual Interaction. In Affective Computing and Intelligent Interaction, Lecture Notes in Computer Science 3784, Springer Berlin/Heidelberg, pp. 622-628.
Fontaine, J., Scherer, K., Soriano, C. 2013. Components of Emotional Meaning: A Sourcebook. Oxford University Press.
Hancock, J., Landrigan, C., Silver, C. 2007. Expressing emotions in text-based communication. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 929-932.
Hevner, K. 1936. Experimental studies of the elements of expression in music. American Journal of Psychology, 48, pp. 246-268.
Hu, Y., Chen, X., Yang, D. 2009. Lyric-Based Song Emotion Detection with Affective Lexicon and Fuzzy Clustering Method. In Proceedings of the 10th International Society for Music Information Retrieval Conference.
Hu, X., Downie, J. 2010. Improving mood classification in music digital libraries by combining lyrics and audio. In Proceedings of the 10th Annual Joint Conference on Digital Libraries, pp. 159-168.
Juslin, P., Laukka, P. 2004. Expression, Perception, and
Induction of Musical Emotions: A Review and a