New developments in content processing (e.g. subtitles, lyrics, quotes, and audio) and emotional impact (automatic or based on self-assessment) could also enrich and further automate the discovery of relations that contribute to an increased comprehension of these contents. In Section 3 we already exemplified some directions, mainly with subtitles (for enriched and multiple quotes) and emotions. One of our goals is also to reach a unified model for the emotions that are relevant in the context of music and movies. In this proof of concept, we are using two sources of classification for quotes and music (Parallel Dots and Spotify) with different models. The representation of the emotions in the same circumplex, based on arousal and valence (Fig. 2h), is already moving in the direction of a coherent unified model and representation, aligned with our research in content and user emotion detection (Chambel et al., 2013; Oliveira et al., 2013; Bernardino et al., 2016).
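To make this direction more concrete, the sketch below illustrates, in Python, how emotion information from two different sources could be placed on a common valence-arousal circumplex: Spotify-style audio features are rescaled, while a discrete label from a text emotion classifier (such as the one used for quotes) is mapped to an approximate point. The function names and the label coordinates are illustrative assumptions, not the actual implementation of our prototype.

```python
# Minimal sketch (an assumption for illustration, not the prototype's code):
# placing emotions from two sources on one valence-arousal circumplex.
import math

# Approximate circumplex positions (valence, arousal) for discrete labels,
# such as those returned by a text emotion classifier for quotes.
# These coordinates are illustrative assumptions only.
LABEL_TO_CIRCUMPLEX = {
    "happy":   (0.8, 0.6),
    "excited": (0.6, 0.9),
    "angry":   (-0.7, 0.7),
    "fear":    (-0.6, 0.6),
    "sad":     (-0.7, -0.4),
    "bored":   (-0.3, -0.7),
    "calm":    (0.4, -0.6),
}

def from_audio_features(valence, energy):
    """Rescale Spotify-style audio features from [0, 1] to the [-1, 1] circumplex."""
    return (2.0 * valence - 1.0, 2.0 * energy - 1.0)

def from_text_label(label):
    """Place a discrete emotion label at an approximate circumplex point."""
    return LABEL_TO_CIRCUMPLEX.get(label.lower(), (0.0, 0.0))

def distance(a, b):
    """Euclidean distance in the circumplex, usable to relate songs and quotes."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Example: an upbeat track and a quote classified as "happy" end up close together.
song = from_audio_features(valence=0.72, energy=0.65)
quote = from_text_label("happy")
print(song, quote, round(distance(song, quote), 3))
```

A shared coordinate space of this kind would let songs, movie scenes, and quotes be compared and related by emotional proximity, regardless of the classifier that produced each annotation.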
Different modalities and contexts of use could also be taken into account to access information in a richer and more flexible way, possibly mediated by conversational and intelligent agents. For example, identifying a song that is playing, or what a character is saying in the movie being watched, could direct users to the corresponding information, to other related content, and to the situation they are experiencing at the moment.
Regarding quotes, and as a complement to the automatic detection of the underlying emotions, users could identify, from their own perspectives, the emotions they associate with them (what they feel and what makes the quotes memorable and valuable), and quotes (from movies and songs) could be suggested or collected in personal journals as inspirational sources, in line with the more recent developments in (Chambel and Carvalho, 2020). Designs for quotes in the Quote View (Fig. 1g) and in users' personal journals could be created automatically based on the colors of the movie scenes and the emotions conveyed, in an approach similar to (Kim and Suk, 2016), or in styles created or selected by the users in contexts of inspiration and self-expression (Nave et al., 2016).
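As a rough illustration of this direction, the sketch below extracts key colors from a movie-scene frame, which could then be used to style a quote card. It is only a sketch loosely in the spirit of Kim and Suk (2016): the frame file name, the number of colors, and the use of Pillow's median-cut quantization are assumptions for illustration, not their method or our prototype's.

```python
# Minimal sketch, under assumptions: the frame file, the number of colors, and
# the use of Pillow's quantization are illustrative choices only.
from PIL import Image

def key_colors(frame_path, k=5):
    """Return the k most frequent palette colors of a scene frame as RGB tuples."""
    img = Image.open(frame_path).convert("RGB").resize((160, 90))
    quantized = img.quantize(colors=k)          # median-cut palette of k colors
    palette = quantized.getpalette()            # flat [r, g, b, r, g, b, ...] list
    counts = sorted(quantized.getcolors(), reverse=True)  # (count, palette index)
    return [tuple(palette[3 * idx: 3 * idx + 3]) for _, idx in counts[:k]]

if __name__ == "__main__":
    # "scene_frame.jpg" is a hypothetical still exported from a movie scene.
    for rgb in key_colors("scene_frame.jpg"):
        print("#%02x%02x%02x" % rgb)
```

The extracted colors, combined with the detected emotions, could then parameterize background, typography, and accent choices for the generated quote designs.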
ACKNOWLEDGEMENTS
This work was partially supported by FCT through funding of the AWESOME project, ref. PTDC/CCI/29234/2017, and LASIGE Research Unit, ref. UIDB/00408/2020.
REFERENCES
Ali, S. O. and Peynircioğlu, Z. F., 2006. Songs and
emotions: are lyrics and melodies equal partners?,
Psychology of Music, 34(4), pp. 511–534.
Bernardino, C., Ferreira, H. A., and Chambel, T., 2016. Towards Media for Wellbeing. In Proc. of ACM TVX'2016, ACM, 171-177.
Chambel, T., 2011. Towards Serendipity and Insights in Movies and Multimedia. In Proc. of International Workshop on Encouraging Serendipity in Interactive Systems, Interact'2011, 12-16.
Chambel, T. and Carvalho, P., 2020. Memorable and Emotional Media Moments: reminding yourself of the good things! In Proceedings of VISIGRAPP 2020 (HUCAPP: International Conference on Human Computer Interaction Theory and Applications), 13 pages.
Chambel, T., Langlois, T., Martins, P., Gil, N., Silva, N.,
Duarte, E., 2013. Content-Based Search Overviews and
Exploratory Browsing of Movies with Movie-Clouds.
International Journal of Advanced Media and
Communication, 5(1): 58-79.
Chou, H. Y. and Lien, N. H., 2010. Advertising effects of songs' nostalgia and lyrics' relevance. Asia Pacific Journal of Marketing and Logistics, 22(3), 314-329.
Condit-Schultz, N. and Huron, D., 2015. Catching the lyrics: intelligibility in twelve song genres. Music Perception: An Interdisciplinary Journal, 32(5), 470-483.
Danescu-Niculescu-Mizil, C., Cheng, J., Kleinberg, J., Lee,
L., 2012. You had me at hello: How phrasing affects
memorability. In Proceedings of the 50th Annual
Meeting of the Association for Computational
Linguistics: Long Papers-Volume 1 (pp. 892-901).
Association for Computational Linguistics.
Dickens, E., 1998. Correlating Teenage Exposure to
Rock/Rap Themes with Associated Behaviors and
Thought Patterns.
Ekman, P., 1992. Are there basic emotions? Psychological Review, 99(3), 550-553.
Flintlock, S., 2017. The Importance of Song Lyrics: why lyrics matter in songs, Beat, Vocal Media. https://beat.media/the-importance-of-song-lyrics
Gen-ref: Genius API. https://docs.genius.com/
Hassenzahl, M., Platz, A., Burmester, M., Lehner, K., 2000. Hedonic and Ergonomic Quality Aspects Determine a Software's Appeal. In Proc. of ACM CHI 2000, The Hague, The Netherlands, pp. 201-208.
Hu, X., Downie, J. S., Ehmann, A. F., 2009. Lyric text mining in music mood classification. In Proc. of the 10th International Society for Music Information Retrieval Conference (ISMIR 2009).
Jenkins, T., 2014. Why does music evoke memories?, Culture, BBC. http://www.bbc.com/culture/story/20140417-why-does-music-evoke-memories
Juslin, P. N., Vastfjall, D., 2008. Emotional responses to
music: The need to consider underlying mechanisms.
Behavioral and Brain Sciences, 31(5), 559-575.
Kim, E., Suk, H. J., 2016. Key Color Generation for
Affective Multimedia Production: An Initial Method