Abstract
As an electronic musician, I am largely occupied with capturing and manipulating sound in real time, specifically the sound of instruments being played by other musicians. Also being a singer, I have found that both of my instruments are often perceived as "invisible". This article discusses various strategies I developed over a number of years in order to "play" sound manipulations in musically reactive ways and to create a live sound-processing "instrument". I encountered problems in explaining to other musicians, audiences, and audio engineers what I do, technically and musically. These difficulties caused me to develop specific ways to address the aesthetic issues of live sound-processing, and to better incorporate my body into performance, both of which ultimately helped alleviate the invisibility problem and make better music.
Notes
- 1.
I started using Max in 1992 (v2.0 Opcode) at a time when commercial/affordable versions of Max could not yet do live signal processing. Real-time signal processing using MSP was added in 1997. Max/MSP is now developed and maintained by Cycling ‘74.
- 2.
Many of the following ideas were presented as a workshop, "Live sound-processing strategies" (at Harvestworks in New York, May 2012), and as a six-day intensive course, "Aesthetics of Live Sound Processing" (at UniArts Sound Art Summer Academy in Helsinki, August 2014). The demonstration performances at the 2012 workshop were with Robert Dick (flute) and Satoshi Takeishi (percussion).
- 3.
Invited by pianist Kathleen Supové to process her playing of "Phrygian Gates" by John Adams, I used many of the techniques described in this article, including feedback, and discovered radical variations in my sound in each of the ten performances we gave in the same venue! The patches and processing were nearly identical each time, and well rehearsed. Only the weather (and the people in the audience) changed, and therefore the way in which my feedback processes worked in that space.
- 4.
Pianist/composer Gordon Beeferman, personal communication 19 January, 2016.
- 5.
I have learned some patterns related to North Indian classical Tala through self-study and private study with other musicians/collaborators, some quite accomplished, who were willing to help me find ways to use them in my work. The patterns I use in my live sound-processing work are merely reflections of these encounters and collaborations. For examples of such patterns, see those listed at https://www.ancient-future.com/theka.html. In my programming, I assigned each syllable to a particular preset in my patches, so that the result reflects the Tala-inspired patterns in the live sound processing.
- 6.
According to him, this happens when the interval between single attacks exceeds 8 s.
- 7.
In an example of "periodicity pitch"—a 1 ms delay line with high feedback resonates at 1000 Hz, a 2 ms delay at 500 Hz, a 4 ms delay at 250 Hz: the resonant frequency is the reciprocal of the delay time. This pattern continues until the resulting pitch drops below the hearing range (sub-audio), at which point it actually starts to sound like delay.
- 8.
Two notable exceptions are George Lewis, and Mari Kimura, whose performances from the stage as composer/performers and improvisers with computers were very inspiring to me at that time.
- 9.
What is it Like to be a Bat? originated as a "digital punk" trio (with co-composer Kitty Brazelton and drummer Danny Tunick) back in 1997: computer music, live processing, electroacoustic sound "tectonic plates", electric guitar, electric bass, drums, and two multi-octave voices (Kitty and Dafna) (see What is it like to be a Bat? CD released 2003. http://www.tzadik.com/index.php?catalog=7707).
- 10.
Unfortunately this, and perhaps some gender bias, resulted in some reviewers assuming that the audio engineer or the drummer did all of the live electronics.
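The syllable-to-preset scheme described in note 5 can be sketched as a simple lookup from a theka's bols (syllables) to processor presets. This is a minimal illustration, not the author's actual patch data: the preset numbers are invented, though the syllable sequence is the standard sixteen-beat Teental theka.

```python
# Hypothetical mapping from Tala bols (syllables) to live-processing
# preset numbers; the preset assignments here are illustrative only.
TEENTAL_THEKA = ["dha", "dhin", "dhin", "dha",
                 "dha", "dhin", "dhin", "dha",
                 "dha", "tin",  "tin",  "ta",
                 "ta",  "dhin", "dhin", "dha"]

PRESET_FOR_SYLLABLE = {"dha": 1, "dhin": 2, "tin": 3, "ta": 4}

def preset_sequence(theka):
    """Turn one cycle of syllables into the preset changes it triggers."""
    return [PRESET_FOR_SYLLABLE[s] for s in theka]
```

Stepping through such a list once per beat yields a cyclic, Tala-shaped pattern of processing changes rather than an arbitrary one.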
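The "periodicity pitch" relation in note 7 can be sketched as a feedback comb filter: a delay line whose output is fed back on itself produces a train of echoes, and once the echoes are close enough together the ear hears a pitch at the reciprocal of the delay time. A minimal sketch in plain Python (the 48 kHz sample rate and 0.9 feedback gain are illustrative assumptions):

```python
def comb_resonance_hz(delay_ms: float) -> float:
    """Resonant frequency of a feedback comb filter: the reciprocal
    of the delay time ("periodicity pitch")."""
    return 1000.0 / delay_ms

def comb_impulse_response(delay_samples: int, feedback: float, n: int) -> list:
    """Impulse response of y[n] = x[n] + feedback * y[n - delay]:
    echoes spaced delay_samples apart, each scaled by the feedback
    gain, which fuse into a pitch once the spacing is short enough."""
    y = [0.0] * n
    for i in range(n):
        x = 1.0 if i == 0 else 0.0
        fb = feedback * y[i - delay_samples] if i >= delay_samples else 0.0
        y[i] = x + fb
    return y

# At a 48 kHz sample rate, a 1 ms delay is 48 samples, so the
# echoes land at samples 0, 48, 96, ... with amplitudes 1, 0.9, 0.81, ...
ir = comb_impulse_response(delay_samples=48, feedback=0.9, n=200)
```

`comb_resonance_hz(1.0)` gives 1000 Hz and `comb_resonance_hz(4.0)` gives 250 Hz, matching the pattern in note 7; lengthen the delay past the audible range and the echoes separate back into a perceptible delay.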
References
Brazelton, K., & Naphtali, D. (2003). "Sermonette-Ha!" (track 4). On What is it like to be a bat? [CD]. Tzadik Records.
Stockhausen, K. (Perf.). (1972). Four criteria of electronic music (Kontakte), Part 1 [Filmed lecture]. Prod. R. Slotover. London: Allied Artists. UbuWeb Film & Video, Lecture 5. http://ubu.com/film/stockhausen_lectures5-1.html. Accessed June 1, 2016.
Moore, F. R. (1988). The Dysfunctions of MIDI. Computer Music Journal, 12(1), 19–28.
Naphtali, D., Dick, R., & Takeishi, S. (2012). Live sound processing strategies. https://youtu.be/x9hWMSzMdTI. Accessed June 1, 2016.
Rowe, R. (1993). Interactive Music Systems: Machine listening and composing. pp. 6–8. Cambridge, MA: The MIT Press.
Schaeffer, P. (2012). In search of a concrete music (C. North & J. Dack, Trans.). pp. 8–9. Oakland, CA: University of California Press.
Smith, J. O. (1985). Fundamentals of digital filter theory. Computer Music Journal, 9(3), 13–23.
Copyright information
© 2017 Springer Nature Singapore Pte Ltd.
Cite this chapter
Naphtali, D. (2017). What If Your Instrument Is Invisible?. In: Bovermann, T., de Campo, A., Egermann, H., Hardjowirogo, SI., Weinzierl, S. (eds) Musical Instruments in the 21st Century. Springer, Singapore. https://doi.org/10.1007/978-981-10-2951-6_26
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-2950-9
Online ISBN: 978-981-10-2951-6
eBook Packages: Engineering (R0)