Genophone: Evolving Sounds and Integral Performance Parameter Mappings
This project explores the application of evolutionary techniques to the design of novel sounds and their characteristics during performance. It is based on the "selective breeding" paradigm and as such dispenses with the need for detailed knowledge of the Sound Synthesis Techniques (SSTs) involved in order to design sounds that are novel and of musical interest. This approach has been used successfully on several SSTs, validating it as an adaptive sound meta-synthesis technique. Additionally, mappings between the control space and the parametric space are evolved as part of the sound setup; these mappings are then used during performance.
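The selective-breeding loop described above can be sketched as follows. This is a minimal illustration, not the Genophone implementation: the representation (a patch as a vector of 16 normalized synthesis parameters), the uniform-crossover recombination, the mutation rate, and all function names are assumptions chosen for clarity.

```python
import random

def crossover(a, b):
    """Uniform recombination: each gene is drawn from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(params, rate=0.1, scale=0.05):
    """Variable mutation: perturb each parameter with probability `rate`,
    clamping the result to the normalized range [0, 1]."""
    return [min(1.0, max(0.0, p + random.gauss(0.0, scale)))
            if random.random() < rate else p
            for p in params]

def breed(selected, pop_size):
    """Produce a new generation from the user-selected parent patches."""
    children = []
    while len(children) < pop_size:
        a, b = random.sample(selected, 2)
        children.append(mutate(crossover(a, b)))
    return children

# Example: 8 synth patches, each a vector of 16 normalized parameters.
population = [[random.random() for _ in range(16)] for _ in range(8)]
# The performer auditions the patches by ear and keeps the two they prefer;
# no knowledge of what the parameters mean is required.
favourites = [population[0], population[3]]
population = breed(favourites, pop_size=8)
```

The key point, matching the abstract's claim, is that the fitness function is the performer's ear: selection is done by listening, so the same loop works regardless of which SST interprets the parameter vector.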
Keywords: Selective Breeding, Variable Mutation, Finger Flex, Recombination Operator, Sound Change