EarGram: An Application for Interactive Exploration of Concatenative Sound Synthesis in Pure Data

  • Conference paper

From Sounds to Music and Emotions (CMMR 2012)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 7900)

Abstract

This paper describes the creative and technical processes behind earGram, an application created with Pure Data for real-time concatenative sound synthesis. The system encompasses four generative music strategies that automatically rearrange and explore a database of descriptor-analyzed sound snippets (the corpus) by rules other than their original temporal order, producing musically coherent outputs. Of note are the system's machine-learning capabilities and its visualization strategies, which provide a valuable aid for decision-making during performance by revealing musical patterns and temporal organizations of the corpus.
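As a point of reference for the approach outlined above, the sketch below illustrates the general idea of descriptor-based unit selection that underlies corpus-based concatenative synthesis: sound snippets summarized by descriptor vectors are recombined according to similarity to a target rather than their original temporal order. This is a minimal, hypothetical Python sketch of the technique in general, not earGram's Pure Data implementation; the descriptor names and data are invented for illustration.

    import numpy as np

    def select_units(corpus_descriptors, target_descriptors):
        """Return, for each target frame, the index of the most similar corpus unit."""
        selection = []
        for target in target_descriptors:
            # Euclidean distance in descriptor space; smaller = more similar.
            distances = np.linalg.norm(corpus_descriptors - target, axis=1)
            selection.append(int(np.argmin(distances)))
        return selection

    # Toy data: 5 corpus units and 3 target frames, each summarized by two
    # hypothetical normalized descriptors (e.g. loudness, spectral centroid).
    corpus = np.array([[0.1, 0.9], [0.4, 0.2], [0.8, 0.8], [0.3, 0.5], [0.9, 0.1]])
    target = np.array([[0.35, 0.45], [0.85, 0.15], [0.2, 0.8]])
    print(select_units(corpus, target))  # -> [3, 4, 0]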

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bernardes, G., Guedes, C., Pennycook, B. (2013). EarGram: An Application for Interactive Exploration of Concatenative Sound Synthesis in Pure Data. In: Aramaki, M., Barthet, M., Kronland-Martinet, R., Ystad, S. (eds) From Sounds to Music and Emotions. CMMR 2012. Lecture Notes in Computer Science, vol 7900. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41248-6_7

  • DOI: https://doi.org/10.1007/978-3-642-41248-6_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-41247-9

  • Online ISBN: 978-3-642-41248-6
