Abstract
A method for expressive melody synthesis is presented that seeks to capture the prosodic elements of musical interpretation: stress, direction, and grouping. An expressive performance is represented as a note-level annotation, classifying each note according to a small alphabet of symbols that describe the note's role within a larger context. An audio performance of the melody is represented by two functions describing the time-evolving frequency and intensity. A method is presented that transforms the expressive annotation into these frequency and intensity functions, thus yielding the audio performance. The problem of expressive rendering is then cast as estimation of the most likely sequence of hidden variables corresponding to the prosodic annotation. Examples are presented on a dataset of around 50 folk-like melodies, realized from both hand-marked and estimated annotations.
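The abstract does not spell out the symbol alphabet or the probabilistic model, so the following is only a loose sketch of the decoding step it describes: recovering a most-likely sequence of hidden prosodic symbols, one per note, by Viterbi dynamic programming. The four-symbol alphabet, the transition scores, and the per-note observation scores are all hypothetical placeholders, not the paper's actual model.

import numpy as np

# Hypothetical prosodic alphabet (placeholder; the paper's actual
# symbol set for stress, direction, and grouping is not reproduced
# in the abstract).
SYMBOLS = ["stressed", "unstressed", "group_start", "group_end"]

def most_likely_annotation(log_trans, log_obs):
    """Viterbi decoding: most likely hidden prosodic symbol per note.

    log_trans: (K, K) array, log score of moving from symbol i to j.
    log_obs:   (N, K) array, log score of each symbol at each of N notes.
    """
    N, K = log_obs.shape
    delta = np.empty((N, K))             # best score ending in each symbol
    back = np.zeros((N, K), dtype=int)   # best predecessor symbol
    delta[0] = log_obs[0]
    for n in range(1, N):
        scores = delta[n - 1][:, None] + log_trans
        back[n] = scores.argmax(axis=0)
        delta[n] = scores.max(axis=0) + log_obs[n]
    # Backtrace from the best final symbol.
    path = [int(delta[-1].argmax())]
    for n in range(N - 1, 0, -1):
        path.append(int(back[n, path[-1]]))
    return [SYMBOLS[k] for k in reversed(path)]

In the paper, the decoded annotation would then drive the construction of the frequency and intensity functions; this sketch stops at the symbol sequence.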
This work was supported by NSF grants IIS-0739563 and IIS-0812244.
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Raphael, C. (2009). Representing and Estimating Musical Expression in Melody. In: Chew, E., Childs, A., Chuan, CH. (eds) Mathematics and Computation in Music. MCM 2009. Communications in Computer and Information Science, vol 38. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02394-1_22
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-02393-4
Online ISBN: 978-3-642-02394-1