Authors:
Ricardo Malheiro 1; Renato Panda 2; Paulo Gomes 2 and Rui Pedro Paiva 2
Affiliations:
1 Center for Informatics and Systems of the University of Coimbra (CISUC) and Miguel Torga Higher Institute, Portugal
2 Center for Informatics and Systems of the University of Coimbra (CISUC), Portugal
Keyword(s):
Music Information Retrieval, Lyrics Music Emotion Recognition, Lyrics Music Classification, Lyrics Music Regression, Lyrics Feature Extraction.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Information Extraction; Knowledge Discovery and Information Retrieval; Knowledge-Based Systems; Mining Text and Semi-Structured Data; Symbolic Systems
Abstract:
This research addresses the role of lyrics in the music emotion recognition process. Our approach is based on several state-of-the-art features complemented by novel stylistic, structural and semantic features. To evaluate our approach, we created a ground-truth dataset containing 180 song lyrics, annotated according to Russell's emotion model. We conduct four types of experiments: regression and classification by quadrant, arousal and valence categories. Compared to the state-of-the-art features (n-grams, the baseline), adding the other features, including the novel ones, improved the F-measure from 68.2%, 79.6% and 84.2% to 77.1%, 86.3% and 89.2%, respectively, for the three classification experiments. To study the relation between features and emotions (quadrants), we performed experiments to identify the features that best describe and discriminate between arousal hemispheres and valence meridians. To further validate these experiments, we built a validation set comprising 771 lyrics extracted from the AllMusic platform, achieving a 73.6% F-measure in classification by quadrants. Regarding regression, results show that, compared to similar studies for audio, we achieve similar performance for arousal and much better performance for valence.
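As an illustrative aside, the quadrant taxonomy from Russell's valence-arousal model referenced in the abstract can be sketched as a simple mapping. This is not the authors' code; the zero-centered thresholds and function name are hypothetical conventions chosen for the example:

```python
def russell_quadrant(valence: float, arousal: float) -> int:
    """Map a (valence, arousal) point to one of Russell's four quadrants.

    Q1: positive valence, high arousal (e.g., happy, excited)
    Q2: negative valence, high arousal (e.g., angry, anxious)
    Q3: negative valence, low arousal  (e.g., sad, depressed)
    Q4: positive valence, low arousal  (e.g., calm, relaxed)

    Assumes both dimensions are centered at zero (hypothetical convention).
    """
    if arousal >= 0:
        return 1 if valence >= 0 else 2
    return 3 if valence < 0 else 4

print(russell_quadrant(0.8, 0.5))    # happy/excited -> 1
print(russell_quadrant(-0.6, -0.4))  # sad/depressed -> 3
```

Classification "by quadrant" then becomes a four-class problem over these labels, while the arousal and valence experiments collapse them to two classes each (hemispheres and meridians, respectively).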