
Prospective View on Sound Synthesis BCI Control in Light of Two Paradigms of Cognitive Neuroscience

  • Mitsuko Aramaki
  • Richard Kronland-Martinet
  • Sølvi Ystad
  • Jean-Arthur Micoulaud-Franchi
  • Jean Vion-Dury
Chapter

Abstract

This article addresses different trends and perspectives on sound synthesis control issues within a cognitive neuroscience framework. Two approaches to sound synthesis are presented: one based on the modelling of physical sources and one based on the modelling of perceptual effects, which involves the identification of invariant sound morphologies (linked to sound semiotics). Depending on the chosen approach, we assume that the resulting synthesis models fall under one of two theoretical frameworks, inspired respectively by the representational-computational and the enactive paradigms. In particular, a change of viewpoint on the epistemological position of the end-user, from a third-person to a first-person perspective, inherently involves different conceptualizations of the interaction between the listener and the sounding object. This differentiation also influences the design of the control strategy, enabling either expert or intuitive sound manipulation. Finally, as a perspective on this survey, explicit and implicit brain-computer interfaces (BCI) are described with respect to the previous theoretical frameworks, and a semiotic-based BCI aiming at increasing the intuitiveness of synthesis control processes is envisaged. Such interfaces may open up new applications adapted to either disabled or healthy subjects.
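To make the contrast between the two synthesis approaches more concrete, the sketch below illustrates the signal-based (perceptual-effects) strategy: an impact sound is built from exponentially damped sinusoids, and a single high-level descriptor of the perceived material is mapped to the damping of the partials. This is a minimal sketch under assumed parameter values; the function name, the material-to-damping mapping and the inharmonicity law are hypothetical placeholders, not the calibrated perceptual model described in the chapter.

```python
import numpy as np

def impact_sound(material="wood", f0=440.0, n_modes=5, dur=1.0, sr=44100):
    """Sketch of an impact-sound synthesizer built from damped sinusoids.

    A high-level 'material' control is mapped to a global damping factor.
    The mapping below is illustrative only, not a calibrated perceptual model.
    """
    # Hypothetical damping coefficients: metal rings longer than glass or wood.
    damping = {"metal": 1.5, "glass": 6.0, "wood": 20.0}[material]
    t = np.arange(int(dur * sr)) / sr
    sound = np.zeros_like(t)
    for k in range(1, n_modes + 1):
        fk = f0 * k * (1.0 + 0.002 * k**2)       # slightly inharmonic partials
        alpha = damping * (1.0 + 0.1 * fk / f0)  # higher modes decay faster
        sound += np.exp(-alpha * t) * np.sin(2 * np.pi * fk * t) / k
    return sound / np.max(np.abs(sound))

# Example: compare the long decay of 'metal' with the dull decay of 'wood'.
wood = impact_sound("wood")
metal = impact_sound("metal")
```

In contrast, a physical-modelling approach would simulate the source itself (for instance the mass-spring or finite-difference equations of a struck plate), and the control parameters would then be physical quantities rather than perceptual descriptors.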

Keywords

Sound Source · Perceptual Effect · Brain-Computer Interface · Environmental Sound · Synthesis Model


Copyright information

© Springer-Verlag London 2014

Authors and Affiliations

  • Mitsuko Aramaki (1)
  • Richard Kronland-Martinet (1)
  • Sølvi Ystad (1)
  • Jean-Arthur Micoulaud-Franchi (2)
  • Jean Vion-Dury (2)

  1. Laboratoire de Mécanique et d’Acoustique (LMA), CNRS UPR 7051, Aix-Marseille University, Marseille Cedex 20, France
  2. Laboratoire de Neurosciences Cognitives (LNC), CNRS UMR 7291, Aix-Marseille University, Marseille Cedex 3, France
