Abstract
One of the main tasks of any work of art is to convey the emotion conceived by the author to its recipient. When several modalities are engaged, a synergistic effect occurs, making it more likely that the target emotional state is achieved. Reading involves mostly visual perception; nevertheless, it can be supplemented with an audio modality: a soundtrack of specially selected music that corresponds to the emotional state of a text fragment.
As the base model for representing emotional states, we selected the physiologically motivated Lövheim cube, which distinguishes 8 emotional states instead of the 2 (positive and negative) usually used in sentiment analysis.
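The Lövheim cube places the eight basic emotions at the corners of a 3D space whose axes are the relative levels of three monoamine neurotransmitters: serotonin, dopamine, and noradrenaline. A minimal sketch of this representation, with corner assignments following Lövheim (2012):

```python
# Lövheim's cube: each corner is a (serotonin, dopamine, noradrenaline)
# triple, 0 = low level, 1 = high level, labelled with one of the eight
# basic emotions per Lövheim (2012).
LOVHEIM_CUBE = {
    (0, 0, 0): "shame/humiliation",
    (0, 0, 1): "distress/anguish",
    (0, 1, 0): "fear/terror",
    (0, 1, 1): "anger/rage",
    (1, 0, 0): "contempt/disgust",
    (1, 0, 1): "surprise",
    (1, 1, 0): "enjoyment/joy",
    (1, 1, 1): "interest/excitement",
}

def emotion(serotonin_high: bool, dopamine_high: bool, noradrenaline_high: bool) -> str:
    """Return the Lövheim emotion label for a neurotransmitter profile."""
    return LOVHEIM_CUBE[(int(serotonin_high), int(dopamine_high), int(noradrenaline_high))]
```

A sentiment classifier over this model thus predicts one of eight labels per text fragment rather than a binary polarity.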
This article describes an approach to selecting music that matches the “mood” of a text fragment by mapping text emotion labels to tags in the LastFM API and fetching the corresponding music to play, together with an experimental validation of the approach.
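The mapping step can be sketched as follows. This is a minimal illustration, not the authors' implementation: the emotion-to-tag table below is a hypothetical example (the paper's actual tag choices may differ), while `tag.gettoptracks` is a real Last.fm API method for retrieving the top tracks for a tag.

```python
from urllib.parse import urlencode

# Hypothetical mapping from Lövheim emotion labels to Last.fm tags;
# illustrative only -- the tag choices used in the paper may differ.
EMOTION_TO_TAG = {
    "shame/humiliation": "melancholic",
    "distress/anguish": "sad",
    "fear/terror": "dark ambient",
    "anger/rage": "aggressive",
    "contempt/disgust": "industrial",
    "surprise": "experimental",
    "enjoyment/joy": "happy",
    "interest/excitement": "upbeat",
}

API_ROOT = "http://ws.audioscrobbler.com/2.0/"

def track_request_url(emotion_label: str, api_key: str, limit: int = 5) -> str:
    """Build a Last.fm tag.gettoptracks request URL for a fragment's emotion."""
    params = {
        "method": "tag.gettoptracks",
        "tag": EMOTION_TO_TAG[emotion_label],
        "api_key": api_key,
        "limit": limit,
        "format": "json",
    }
    return API_ROOT + "?" + urlencode(params)
```

Fetching the returned URL (e.g. with `requests.get(url).json()`) yields a ranked track list from which a soundtrack item can be queued for the fragment.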
References
Su, F., Markert, K.: From words to senses: a case study in subjectivity recognition. In: Proceedings of Coling 2008, Manchester, UK (2008)
Hu, M., Liu, B.: Mining and summarizing customer reviews. In: Proceedings of KDD 2004 (2004)
Kim, S.M., Hovy, E.H.: Identifying and analyzing judgment opinions. In: Proceedings of the Human Language Technology/North American Association of Computational Linguistics Conference (HLT-NAACL 2006), New York, NY (2006)
Mehrabian, A.: Basic dimensions for a general psychological theory, pp. 39–53 (1980)
Lance, B., et al.: Relation between gaze behavior and attribution of emotion. In: Prendinger, H. (ed.) Intelligent Virtual Agents: 8th International Conference IVA, pp. 1–9 (2008)
Lövheim, H.: A new three-dimensional model for emotions and monoamine neurotransmitters. Med. Hypotheses 78, 341–348 (2012)
Talanov, M., Toschev, A.: Computational emotional thinking and virtual neurotransmitters. Int. J. Synth. Emotions 5(1), 1–8
Affective Text: data annotated for emotions and polarity. Dataset. http://web.eecs.umich.edu/~mihalcea/downloads.html#affective
EmoBank: 10k sentences annotated with Valence, Arousal and Dominance values. Dataset. https://github.com/JULIELab/EmoBank
© 2019 Springer Nature Switzerland AG
Kalinin, A., Kolmogorova, A. (2019). Automated Soundtrack Generation for Fiction Books Backed by Lövheim’s Cube Emotional Model. In: Eismont, P., Mitrenina, O., Pereltsvaig, A. (eds) Language, Music and Computing. LMAC 2017. Communications in Computer and Information Science, vol 943. Springer, Cham. https://doi.org/10.1007/978-3-030-05594-3_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-05593-6
Online ISBN: 978-3-030-05594-3