Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation


Abstract

In this chapter, I describe how supervised learning algorithms can be used to build new digital musical instruments. Rather than treating these algorithms merely as methods for inferring mathematical relationships from data, I show how they can be understood as valuable design tools that support embodied, real-time, creative practices. Through this discussion, I argue that the relationship between instrument builders and instrument creation tools warrants closer consideration: the affordances of a creation tool shape the musical potential of the instruments that are built, as well as the experiences and even the creative aims of the human builder. Understanding creation tools as “instruments” themselves invites us to examine them from perspectives informed by past work on performer-instrument interactions.
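The supervised-learning workflow the abstract alludes to — a builder records example (gesture, sound-parameter) pairs, trains a model, and then performs with the learned mapping — can be sketched in miniature. The sketch below is an illustrative reconstruction, not code from the chapter: the sensor readings, synthesis parameters, and the nearest-neighbour “model” are hypothetical stand-ins (tools of this kind typically train neural networks or other regressors on the recorded examples).

```python
# A minimal sketch of interactive supervised learning for instrument building.
# All sensor values and synthesis parameters are hypothetical; a real tool
# would use a trained regressor rather than nearest-neighbour lookup.
import math

# Training examples recorded by the instrument builder: each gesture is a
# sensor reading (e.g. 3 accelerometer axes); each target is a pair of
# synthesis parameters (pitch_hz, amplitude).
examples = [
    ((0.0, 0.0, 1.0), (220.0, 0.1)),  # hand at rest  -> quiet low tone
    ((0.9, 0.1, 0.2), (440.0, 0.8)),  # tilt left     -> loud high tone
    ((0.1, 0.9, 0.2), (330.0, 0.5)),  # tilt forward  -> medium tone
]

def map_gesture(sensor, examples):
    """Return the synthesis parameters of the closest recorded gesture."""
    _, params = min(examples, key=lambda ex: math.dist(ex[0], sensor))
    return params

# In performance, each incoming sensor frame is mapped continuously to sound;
# this frame is close to the recorded "tilt left" gesture.
pitch, amp = map_gesture((0.8, 0.2, 0.3), examples)
```

The design point the chapter develops is that the builder iterates on the `examples` list itself — adding, deleting, and re-recording demonstrations — rather than editing mapping code, which is what makes the creation tool feel instrument-like.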


Notes

  1. In this chapter, I use the word “composer” to refer to people who build new musical instruments and create customized controller mappings, rather than referring to them as instrument builders or musicians. This word choice reflects an understanding of instrument building as an act of musical composition (cf. Schnell and Battier 2002, discussed above). It also accommodates the fact that there may not be a clear or consistent distinction between the notions of instrument, “preset” or mapping, and composition. For instance, at least two of the composers discussed here (Dan Trueman and Laetitia Sonami) have used the same controllers or sensors to play different musical pieces, but designed a different gesture-to-sound mapping for each piece.

References

  • Bencina, R. (2005). The metasurface: Applying natural neighbour interpolation to two-to-many mapping. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 101–104).

  • Bevilacqua, F., Müller, R., & Schnell, N. (2005). MnM: A Max/MSP mapping toolbox. In Proceedings of the International Conference on New Interfaces for Musical Expression (pp. 85–88).

  • Buxton, B. (2010). Sketching user experiences: Getting the design right and the right design. Morgan Kaufmann.

  • Chadabe, J. (1997). Electric sound: The past and promise of electronic music. Upper Saddle River, NJ: Prentice Hall.

  • Chadabe, J. (2002). The limitations of mapping as a structural descriptive in electronic instruments. In Proceedings of the International Conference on New Interfaces for Musical Expression.

  • Drummond, J. (2009). Understanding interactive systems. Organised Sound, 14(2), 124–133.

  • Fails, J. A., & Olsen, D. R., Jr. (2003). Interactive machine learning. In Proceedings of the International Conference on Intelligent User Interfaces (pp. 39–45).

  • Fels, S. S., & Hinton, G. E. (1995). Glove-TalkII: An adaptive gesture-to-formant interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 456–463).

  • Fiebrink, R. (2011). Real-time human interaction with supervised learning algorithms for music composition and performance (Doctoral dissertation). Princeton University, Princeton, NJ.

  • Fiebrink, R., Cook, P. R., & Trueman, D. (2011). Human model evaluation in interactive supervised learning. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 147–156).

  • Fiebrink, R., Trueman, D., Britt, C., Nagai, M., Kaczmarek, K., et al. (2010). Toward understanding human-computer interaction in composing the instrument. In Proceedings of the International Computer Music Conference.

  • Fiebrink, R., Trueman, D., & Cook, P. R. (2009). A meta-instrument for interactive, on-the-fly machine learning. In Proceedings of the International Conference on New Interfaces for Musical Expression.

  • Garnett, G., & Goudeseune, C. (1999). Performance factors in control of high-dimensional spaces. In Proceedings of the International Computer Music Conference (pp. 268–271).

  • Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., & Witten, I. H. (2009). The Weka data mining software: An update. ACM SIGKDD Explorations Newsletter, 11(1), 10–18.

  • Hunt, A., & Kirk, R. (2000). Mapping strategies for musical performance. In M. M. Wanderley & M. Battier (Eds.), Trends in gestural control of music. IRCAM—Centre Pompidou.

  • Hunt, A., & Wanderley, M. M. (2002). Mapping performer parameters to synthesis engines. Organised Sound, 7(2), 97–108.

  • Hunt, A., Wanderley, M. M., & Paradis, M. (2002). The importance of parameter mapping in electronic instrument design. In Proceedings of the International Conference on New Interfaces for Musical Expression.

  • Katan, S., Grierson, M., & Fiebrink, R. (2015). Using interactive machine learning to support interface development through workshops with disabled people. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 251–254).

  • Lee, M., Freed, A., & Wessel, D. (1991). Real-time neural network processing of gestural and acoustic signals. In Proceedings of the International Computer Music Conference (p. 277).

  • Moon, B. (1997). Score following and real-time signal processing strategies in open-form compositions. Information Processing Society of Japan, SIG Notes, 97(122), 12–19.

  • Morris, D., & Fiebrink, R. (2013). Using machine learning to support pedagogy in the arts. Personal and Ubiquitous Computing, 17(8), 1631–1635.

  • Resnick, M., Myers, B., Nakakoji, K., Shneiderman, B., Pausch, R., Selker, T., et al. (2005). Design principles for tools to support creative thinking. In Report of Workshop on Creativity Support Tools. Washington, DC, USA.

  • Rittel, H. W. (1972). On the planning crisis: Systems analysis of the ‘first and second generations’. Bedriftsøkonomen, 8, 390–396.

  • Rowe, R., Garton, B., Desain, P., Honing, H., Dannenberg, R., Jacobs, D., et al. (1993). Editor’s notes: Putting Max in perspective. Computer Music Journal, 17(2), 3–11.

  • Schnell, N., & Battier, M. (2002). Introducing composed instruments, technical and musicological implications. In Proceedings of the International Conference on New Interfaces for Musical Expression.

  • Sonami, L. (2016). Lecture on machine learning, within online class “Machine Learning for Musicians and Artists” by R. Fiebrink, produced by Kadenze, Inc.

  • Wanderley, M. M., & Depalle, P. (2004). Gestural control of sound synthesis. Proceedings of the IEEE, 92(4), 632–644.

  • Wessel, D. (2006). An enactive approach to computer music performance. In Y. Orlarey (Ed.), Le feedback dans la création musicale (pp. 93–98). Lyon, France: Studio Gramme.

  • Wright, M., & Freed, A. (1997). Open Sound Control: A new protocol for communicating with sound synthesizers. In Proceedings of the International Computer Music Conference.


Author information


Correspondence to Rebecca Fiebrink.



Copyright information

© 2017 Springer Nature Singapore Pte Ltd.

Cite this chapter

Fiebrink, R. (2017). Machine Learning as Meta-Instrument: Human-Machine Partnerships Shaping Expressive Instrumental Creation. In: Bovermann, T., de Campo, A., Egermann, H., Hardjowirogo, SI., Weinzierl, S. (eds) Musical Instruments in the 21st Century. Springer, Singapore. https://doi.org/10.1007/978-981-10-2951-6_10

  • DOI: https://doi.org/10.1007/978-981-10-2951-6_10

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-2950-9

  • Online ISBN: 978-981-10-2951-6

  • eBook Packages: Engineering (R0)
