Unifying Performer and Accompaniment

  • Conference paper
Computer Music Modeling and Retrieval (CMMR 2005)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 3902)


Abstract

A unique real-time system for correlating a vocal, musical performance to an electronic accompaniment is presented. The system has been implemented and tested extensively in performance in the author's opera 'La Quintrala', and experience with its use in practice is reported. The system's functionality is outlined and placed in the context of current research, and its possibilities for further development and other uses are discussed. The system correlates voice analysis with an underlying chord structure stored in computer memory. This chord structure defines the primary supportive pitches and links the notated and electronic scores together, addressing the singer's need for tonal 'indicators' at any given moment. A computer-generated note is initiated jointly by the singer (through the onset of a note, or through some element in the continuous spectrum of the singing) and by the computer, through an accompaniment algorithm. The evolution of this relationship between singer and computer is predefined in the application according to the structural intentions of the score, and is shaped by the musical and expressive efforts of the singer. This shared influence over the execution of the accompaniment creates a dynamic, musical interplay between singer and computer, and is a very fertile musical area for a composer's combined computer programming and score writing.
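The accompaniment logic the abstract describes can be sketched in miniature: a stored chord structure indexed by score position, and an onset-triggered choice of a supportive pitch near the sung note. This is a hedged illustration only; all names (`Chord`, `choose_support_pitch`, the toy chord data) are the editor's assumptions, not the system's actual implementation.

```python
# Minimal sketch of onset-driven accompaniment against a stored chord
# structure, as described in the abstract. Illustrative only.

from dataclasses import dataclass


@dataclass
class Chord:
    """One entry in the stored chord structure: the primary supportive
    pitches (MIDI note numbers) valid from a given score position."""
    start_beat: float
    pitches: tuple


# A toy chord structure linking notated and electronic score.
CHORD_STRUCTURE = [
    Chord(0.0, (60, 64, 67)),  # C major
    Chord(4.0, (62, 65, 69)),  # D minor
    Chord(8.0, (59, 62, 67)),  # G major
]


def current_chord(beat: float) -> Chord:
    """Return the chord whose span contains the given score position."""
    active = CHORD_STRUCTURE[0]
    for chord in CHORD_STRUCTURE:
        if chord.start_beat <= beat:
            active = chord
    return active


def choose_support_pitch(sung_midi: float, beat: float) -> int:
    """On a detected note onset, pick the chord tone nearest the sung
    pitch as the tonal 'indicator' for the singer."""
    chord = current_chord(beat)
    return min(chord.pitches, key=lambda p: abs(p - sung_midi))


# Singer near E4 (64.3) during the first chord is supported with E4 (64).
print(choose_support_pitch(64.3, beat=1.0))
```

In the real system the trigger would come from voice analysis (note onsets or spectral features) rather than an explicit function call, and the balance between singer-driven and algorithm-driven initiation would evolve as predefined in the score.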





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Graugaard, L. (2006). Unifying Performer and Accompaniment. In: Kronland-Martinet, R., Voinier, T., Ystad, S. (eds) Computer Music Modeling and Retrieval. CMMR 2005. Lecture Notes in Computer Science, vol 3902. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11751069_16

  • DOI: https://doi.org/10.1007/11751069_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-34027-0

  • Online ISBN: 978-3-540-34028-7

  • eBook Packages: Computer Science, Computer Science (R0)
