Classification of Phonemes Using EEG

  • R. Aiswarya Priyanka
  • G. Sudha Sadasivam

Conference paper


Artificial speech synthesis for brain–computer interfaces (BCI) can draw on electroencephalography (EEG) and electrocorticography (ECoG). This paper focuses on using EEG to classify phonological categories. Although the literature covers the identification and classification of phoneme information in EEG signals, classification accuracy is high for some phonological categories and poor for others. This paper therefore identifies the correlation between imagined-speech EEG and audio signals in order to select appropriate EEG features, and it identifies the EEG channels best suited to imagined speech. Once features are selected, phonemes are classified as vowels or consonants using a support vector machine. Experimental results show good accuracy with 49 features that correlate with the audio signals.
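The pipeline the abstract describes (select the EEG features most correlated with audio signals, then train an SVM to separate vowels from consonants) can be sketched as below. This is a minimal illustration on synthetic data: the feature count, trial count, and the use of Pearson correlation with a per-trial audio correlate are assumptions for the sketch, not details taken from the paper; only the 49-feature count and the SVM classifier come from the abstract.

```python
# Hedged sketch: correlation-based EEG feature selection followed by an SVM
# vowel/consonant classifier. All data here is synthetic; dimensions and the
# scalar per-trial audio correlate are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_features = 200, 120          # features pooled across EEG channels (assumed)
X = rng.normal(size=(n_trials, n_features))
audio = rng.normal(size=n_trials)        # one audio-derived value per trial (assumed)

# Synthetic ground truth: make the first 49 features weakly track the audio
# signal so the selection step has something real to recover.
X[:, :49] += 0.8 * audio[:, None]
y = (X[:, :49].mean(axis=1) + 0.3 * rng.normal(size=n_trials) > 0).astype(int)

# Keep the k features whose Pearson correlation with the audio signal is strongest.
k = 49
corrs = np.array([abs(pearsonr(X[:, j], audio)[0]) for j in range(n_features)])
selected = np.argsort(corrs)[::-1][:k]

# Train and evaluate an SVM on the selected features only.
X_tr, X_te, y_tr, y_te = train_test_split(
    X[:, selected], y, test_size=0.3, random_state=0
)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

In practice the correlation step would run per channel as well as per feature, which is how the channel-selection stage the abstract mentions could reuse the same machinery.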


BCI · ECoG · EEG · Phonology · Speech synthesis · Thought-to-speech conversion · Phoneme extraction · Machine learning · Deep learning



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Computer Science and Engineering, PSG College of Technology, Coimbatore, India
