Abstract
Speech emotion recognition (SER) plays a vital role in natural interaction between humans and machines. However, because human emotions are complex, the features learned in existing work contain a large amount of emotion-irrelevant, redundant information, which degrades SER performance. To alleviate this problem, we propose a novel model, the Upgraded Attention-based Local Feature Learning Block (UA-LFLB). Concretely, the LFLB extracts deep local sequence features, which are then fed to the UA mechanism to capture salient utterance-level features with contextual information. In this way, more accurate and discriminative features are learned, greatly reducing the redundant information they contain. To evaluate the feasibility of the proposed model, we conduct experiments on a widely used emotional database. Experimental results show that the proposed model outperforms state-of-the-art methods on the IEMOCAP database, achieving a 9% improvement in average accuracy.
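The abstract describes the pipeline only at a high level (local feature learning followed by attention over time to form an utterance-level representation). The following is a minimal PyTorch sketch of that general structure, not the paper's exact architecture: the Conv-BN-ELU-pooling layout of the LFLB, the additive attention pooling, and all sizes (40 log-mel bins, 64/128 channels, 4 emotion classes) are illustrative assumptions.

import torch
import torch.nn as nn

class LFLB(nn.Module):
    # Local Feature Learning Block: Conv -> BatchNorm -> ELU -> MaxPool (a common LFLB layout).
    def __init__(self, in_ch, out_ch, kernel=3, pool=2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel, padding=kernel // 2),
            nn.BatchNorm1d(out_ch),
            nn.ELU(),
            nn.MaxPool1d(pool),
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.block(x)

class AttentionPool(nn.Module):
    # Additive attention over time steps, producing one utterance-level vector.
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, h):  # h: (batch, time, dim)
        w = torch.softmax(self.score(h), dim=1)   # attention weights over time
        return (w * h).sum(dim=1)                 # (batch, dim)

class SERNet(nn.Module):
    # Illustrative LFLB + attention pipeline for 4-class SER; sizes are placeholders.
    def __init__(self, n_mels=40, n_classes=4):
        super().__init__()
        self.lflbs = nn.Sequential(LFLB(n_mels, 64), LFLB(64, 128))
        self.attn = AttentionPool(128)
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):  # x: (batch, n_mels, time)
        h = self.lflbs(x)                     # (batch, 128, time')
        h = h.transpose(1, 2)                 # (batch, time', 128)
        return self.classifier(self.attn(h))  # (batch, n_classes)

# Quick shape check on a random log-mel batch.
logits = SERNet()(torch.randn(8, 40, 300))
print(logits.shape)  # torch.Size([8, 4])

The attention pooling collapses the variable-length sequence of local features into a single weighted summary per utterance, which is the role the abstract attributes to the UA mechanism; the actual paper should be consulted for the specific block configuration.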
References
Basu, S., Bag, A., Mahadevappa, M., Mukherjee, J., Guha, R.: Affect detection in normal groups with the help of biological markers. In: 2015 Annual IEEE India Conference (INDICON), pp. 1–6 (2015)
Busso, C., et al.: IEMOCAP: interactive emotional dyadic motion capture database. Lang. Resour. Eval. 42, 335–359 (2008)
Chen, L.F., Su, W., Feng, Y., Wu, M., She, J., Hirota, K.: Two-layer fuzzy multiple random forest for speech emotion recognition in human-robot interaction. Inf. Sci. 509, 150–163 (2020)
Chen, M., He, X., Yang, J., Zhang, H.: 3-d convolutional recurrent neural networks with attention model for speech emotion recognition. IEEE Sig. Process. Lett. 25(10), 1440–1444 (2018)
Han, J., Zhang, Z., Cummins, N., Schuller, B.: Adversarial training in affective computing and sentiment analysis: recent advances and perspectives [review article]. IEEE Comput. Intell. Mag. 14, 68–81 (2019)
Huang, Z., Dong, M., Mao, Q., Zhan, Y.: Speech emotion recognition using CNN. In: MM 2014, pp. 801–804 (2014)
Landau, M.J.: Acoustical properties of speech as indicators of depression and suicidal risk. Vanderbilt Undergraduate Res. J. 4 (2008)
Li, Y., Baidoo, C., Cai, T., Kusi, G.A.: Speech emotion recognition using 1D CNN with no attention. In: International Computer Science and Engineering Conference (ICSEC), pp. 351–356 (2019)
van der Maaten, L., Hinton, G.E.: Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008)
Mao, Q., Dong, M., Huang, Z., Zhan, Y.: Learning salient features for speech emotion recognition using convolutional neural networks. IEEE Trans. Multimedia 16(8), 2203–2213 (2014)
Meng, H., Yan, T., Yuan, F., Wei, H.: Speech emotion recognition from 3d log-mel spectrograms with deep learning network. IEEE Access 7, 125868–125881 (2019)
Mirsamadi, S., Barsoum, E., Zhang, C.: Automatic speech emotion recognition using recurrent neural networks with local attention. In: ICASSP, pp. 2227–2231 (2017)
Mishra, S., Mandal, B., Puhan, N.B.: Multi-level dual-attention based CNN for macular optical coherence tomography classification. IEEE Sig. Process. Lett. 26, 1793–1797 (2019)
Sajjad, M., Kwon, S.: Clustering-based speech emotion recognition by incorporating learned features and deep BiLSTM. IEEE Access 8, 79861–79875 (2020)
Park, J.S., Kim, J., Oh, Y.: Feature vector classification based speech emotion recognition for service robots. IEEE Trans. Consum. Electron. 55, 1590–1596 (2009)
Schmidt, E.M., Kim, Y.E.: Learning emotion-based acoustic features with deep belief networks. In: IEEE WASPAA, pp. 65–68 (2011)
Swain, M., Routray, A., Kabisatpathy, P.: Databases, features and classifiers for speech emotion recognition: a review. Int. J. Speech Technol. 21(1), 93–120 (2018)
Xia, G., Li, F., Zhao, D.D., Zhang, Q., Yang, S.: Fi-net: a speech emotion recognition framework with feature integration and data augmentation. In: 2019 5th International Conference on Big Data Computing and Communications (BIGCOM), pp. 195–203 (2019)
Zeng, Z., Pantic, M., Roisman, G., Huang, T.: A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell. 31(1), 39–58 (2009)
Zhang, Z., Cummins, N., Schuller, B.: Advanced data exploitation in speech analysis: an overview. IEEE Sig. Process. Mag. 34, 107–129 (2017)
Zhao, H., Xiao, Y., Han, J., Zhang, Z.: Compact convolutional recurrent neural networks via binarization for speech emotion recognition. In: 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6690–6694 (2019)
Zhao, J., Mao, X., Chen, L.: Speech emotion recognition using deep 1D & 2D CNN LSTM networks. Biomed. Signal Process. Control 47, 312–323 (2019)
Zhao, Z., Zheng, Y., Zhang, Z., Wang, H., Zhao, Y., Li, C.: Exploring spatio-temporal representations by integrating attention-based bidirectional-LSTM-RNNs and FCNs for speech emotion recognition. In: INTERSPEECH, pp. 272–276 (2018)
Zheng, W., Yu, J., Zou, Y.: An experimental study of speech emotion recognition based on deep convolutional neural networks. In: 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 827–831 (2015)
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhao, H., Gao, Y., Xiao, Y. (2021). Upgraded Attention-Based Local Feature Learning Block for Speech Emotion Recognition. In: Karlapalem, K., et al. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2021. Lecture Notes in Computer Science, vol 12713. Springer, Cham. https://doi.org/10.1007/978-3-030-75765-6_10
DOI: https://doi.org/10.1007/978-3-030-75765-6_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-75764-9
Online ISBN: 978-3-030-75765-6
eBook Packages: Computer Science, Computer Science (R0)