Speech Emotion Recognition Using Combined Multiple Pairwise Classifiers

  • Conference paper
  • Published in: HCI International 2021 - Late Breaking Posters (HCII 2021)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1498)

Abstract

This study proposes and evaluates a novel approach to speech emotion recognition based on multiple pairwise classifiers, one for each emotion pair, which reduces both dimensionality and emotion ambiguity. The method was evaluated on the widely used English IEMOCAP corpus and achieved significantly higher accuracy than a conventional method.
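The combination scheme the abstract describes, one classifier per emotion pair whose decisions are merged into a final label, can be sketched as a one-vs-one voting ensemble. The toy feature vectors, the nearest-centroid pairwise classifiers, and the function names below are illustrative assumptions, not the paper's actual features or models; only the pairwise-voting structure is taken from the abstract.

```python
from itertools import combinations
from collections import Counter

# Hypothetical toy data: each utterance is a 2-D feature vector labeled
# with one of four emotions (IEMOCAP-style categories).
train = {
    "angry":   [(0.9, 0.1), (0.8, 0.2)],
    "happy":   [(0.1, 0.9), (0.2, 0.8)],
    "sad":     [(0.1, 0.1), (0.2, 0.2)],
    "neutral": [(0.5, 0.5), (0.6, 0.4)],
}

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def make_pairwise_classifier(emo_a, emo_b):
    """Train a trivial nearest-centroid classifier that only separates emo_a from emo_b."""
    ca, cb = centroid(train[emo_a]), centroid(train[emo_b])
    def classify(x):
        da = sum((xi - ci) ** 2 for xi, ci in zip(x, ca))
        db = sum((xi - ci) ** 2 for xi, ci in zip(x, cb))
        return emo_a if da <= db else emo_b
    return classify

# One binary classifier per emotion pair (one-vs-one).
pair_classifiers = [make_pairwise_classifier(a, b)
                    for a, b in combinations(train, 2)]

def predict(x):
    """Combine all pairwise decisions by majority vote."""
    votes = Counter(clf(x) for clf in pair_classifiers)
    return votes.most_common(1)[0][0]

print(predict((0.85, 0.15)))  # → angry
```

Each pairwise model only ever sees two emotions, which is what makes the per-classifier problem lower-dimensional and less ambiguous than a single multi-class decision.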

Dr. Panikos Heracleous is currently with the Artificial Intelligence Research Center (AIRC), AIST, Japan.



Corresponding author

Correspondence to Yasser Mohammad.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Heracleous, P., Mohammad, Y., Yoneyama, A. (2021). Speech Emotion Recognition Using Combined Multiple Pairwise Classifiers. In: Stephanidis, C., Antona, M., Ntoa, S. (eds) HCI International 2021 - Late Breaking Posters. HCII 2021. Communications in Computer and Information Science, vol 1498. Springer, Cham. https://doi.org/10.1007/978-3-030-90176-9_16

  • DOI: https://doi.org/10.1007/978-3-030-90176-9_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-90175-2

  • Online ISBN: 978-3-030-90176-9

  • eBook Packages: Computer Science (R0)
