Abstract
In this study, a novel approach to speech emotion recognition is proposed and evaluated. The method trains a separate pairwise classifier for each emotion pair, reducing both feature dimensionality and emotion ambiguity. Evaluated on the widely used English IEMOCAP corpus, the proposed method achieved significantly higher accuracy than a conventional baseline.
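The pairwise scheme described above can be sketched with scikit-learn's one-vs-one wrapper, which trains one binary classifier per class pair and combines their votes. This is only a minimal illustration of the general pairwise-classification idea, not the authors' implementation: the emotion labels, the random stand-in features (real systems would use acoustic features such as MFCCs), and the SVM base classifier are all assumptions.

```python
# Illustrative sketch of pairwise (one-vs-one) emotion classification.
# One binary classifier is trained per emotion pair; their votes are
# combined to select the final label. The feature vectors below are
# synthetic stand-ins for real acoustic features (e.g. MFCCs).
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
emotions = ["angry", "happy", "neutral", "sad"]  # hypothetical label set

# Synthetic "utterances": 40-dim feature vectors, one cluster per emotion.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(50, 40))
               for i in range(len(emotions))])
y = np.repeat(np.arange(len(emotions)), 50)

# 4 emotions -> C(4, 2) = 6 pairwise binary classifiers.
clf = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)
print(len(clf.estimators_))            # number of pairwise models
print(emotions[clf.predict(X[:1])[0]]) # predicted emotion for one utterance
```

Because each pairwise model only separates two emotions, each binary decision is simpler than a single multi-class decision over all emotions, which is the ambiguity-reduction intuition behind the approach.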
Dr. Panikos Heracleous is currently with Artificial Intelligence Research Center (AIRC), AIST, Japan.
© 2021 Springer Nature Switzerland AG
Heracleous, P., Mohammad, Y., Yoneyama, A. (2021). Speech Emotion Recognition Using Combined Multiple Pairwise Classifiers. In: Stephanidis, C., Antona, M., Ntoa, S. (eds) HCI International 2021 - Late Breaking Posters. HCII 2021. Communications in Computer and Information Science, vol 1498. Springer, Cham. https://doi.org/10.1007/978-3-030-90176-9_16
Print ISBN: 978-3-030-90175-2
Online ISBN: 978-3-030-90176-9