Acoustic and Perceptual-Auditory Determinants of Transmission of Speech and Music Information (in Regard to Semiotics)

  • Rodmonga Potapova
  • Vsevolod Potapov
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 943)


The paper presents a conception of speech and music research grounded in semiotics. The speech and music semiotic systems are examined as special hierarchical subsystems of the common human semiotic system of interpersonal communication. Moreover, each speech semiotic subsystem has subsystems of its own: e.g., articulatory, phonatory, acoustic, perceptual-auditory, etc. The acoustic subsystem of speech semiotics in turn comprises its own subsystems, e.g. duration (tn, ms), intensity (In, dB), fundamental frequency (F0n, Hz), and spectrum values (Fn, Hz). Speech and music are considered two congeneric subsystems of the common semiotic system of interpersonal communication with regard to semantics, syntactics and pragmatics: semantics as the area of relations between speech and music expressions, on the one hand, and objects and processes in the world, on the other; syntactics as the area of the interrelations among these expressions; and pragmatics as the area of the influence of the meaning of these expressions on their users. This semiotic conception of speech and music includes the binary opposition “ratio-emotio”, the actual “theme-rheme” segmentation of speech and musical text/discourse, “segmental-suprasegmental items”, “prosody-timbre items”, etc. This research attempts to show the productiveness of the method of the music “score” (composed on the basis of the results of an analysis of speech prosodic characteristics), which can help determine the informativeness of the parameters used to identify the speaker’s emotional state. A music score created from this analysis is viewed as a model of the prosodic vocal outline of the utterance. At the same time, the prosodic basis of speech and the basic expressive means of the “music language” are connected: speech, like music, uses the same space and time coordinates to represent the movements of sound items. The height metric grid, which locates a sound item in dynamics, is based on this principle.
The temporal organization of music and speech creates a common temporal basis. The music score created on the basis of the results of the acoustic analysis of prosodic features meets these requirements, taking into account the restrictions caused by the absence of a segmented (sound-syllable) text. Thus, with the help of a special music synthesis of the speech utterance, based on the acoustic and subsequent perceptual-auditory analysis, it is possible to conduct “analysis-synthesis” research.
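The mapping from prosodic measurements to musical score events described above can be illustrated in code. The following is a minimal sketch, not the authors' actual procedure: each measured fundamental-frequency value F0 (Hz) is quantized onto an equal-tempered semitone grid (a simple realization of the "height metric grid"), while the measured duration (ms) carries over as the note's length. All function and variable names here are hypothetical.

```python
import math

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def hz_to_midi(f0_hz):
    """Quantize a fundamental frequency (Hz) to the nearest MIDI note number
    (equal temperament, A4 = 440 Hz = MIDI 69)."""
    return round(69 + 12 * math.log2(f0_hz / 440.0))

def midi_to_name(midi):
    """Convert a MIDI note number to scientific pitch notation, e.g. 60 -> 'C4'."""
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

def prosody_to_score(measurements):
    """Map (F0 in Hz, duration in ms) pairs from a prosodic analysis
    to (note name, duration in ms) score events."""
    return [(midi_to_name(hz_to_midi(f0)), dur) for f0, dur in measurements]

# Example: a falling intonation contour rendered as score events
contour = [(220.0, 180), (196.0, 150), (174.6, 240)]
print(prosody_to_score(contour))  # [('A3', 180), ('G3', 150), ('F3', 240)]
```

Snapping F0 to discrete semitones necessarily discards the continuous glides of natural intonation; a finer grid (e.g. quarter-tones) would trade readability of the score for fidelity to the contour.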


Keywords: Semiotics · Common human semiotic system · Subsystems · Acoustics · Articulation · Phonation · Auditory perception · Music · Speech and music information · Speech and music synthesis



This research is supported by the Russian Science Foundation, Project № 18-18-00477.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Institute of Applied and Mathematical Linguistics, Moscow State Linguistic University, Moscow, Russia
  2. Faculty of Philology, Lomonosov Moscow State University, Moscow, Russia
