
Emotional Speech Recognition Using SMILE Features and Random Forest Tree

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 1037))

Abstract

The recognition of emotional speech and its accurate representation is a challenging area of research in speech and audio processing. Existing representations of emotional speech often fail to provide features that discriminate between emotions, and they suffer from further limitations. In this work, we evaluate features from the openEAR toolkit on the publicly available SAVEE database. The low-level descriptors and their statistical functionals provide discriminating features for each emotion and yield state-of-the-art results on this dataset. A random forest classifier is trained in WEKA for classification, obtaining an accuracy of 76.1% on the SAVEE emotional database.
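The feature-extraction idea described above — summarizing variable-length, frame-level low-level descriptors (LLDs) with statistical functionals to obtain a fixed-length utterance vector — can be sketched as follows. This is a minimal illustration of the concept, not the openEAR toolkit's actual feature set; the descriptor names (`energy`, `f0`) and the choice of five functionals are hypothetical stand-ins for the much larger SMILE configuration.

```python
import statistics

FUNCTIONAL_NAMES = ("mean", "stdev", "min", "max", "range")

def functionals(frames):
    """Summarize one low-level descriptor (a per-frame value sequence)
    with a fixed set of statistical functionals."""
    return {
        "mean": statistics.fmean(frames),
        "stdev": statistics.pstdev(frames),
        "min": min(frames),
        "max": max(frames),
        "range": max(frames) - min(frames),
    }

def utterance_features(llds):
    """Concatenate per-descriptor functionals into one fixed-length
    feature vector, regardless of how many frames the utterance has."""
    vec = []
    for frames in llds.values():
        stats = functionals(frames)
        vec.extend(stats[name] for name in FUNCTIONAL_NAMES)
    return vec

# Hypothetical frame-level descriptors for a single utterance
llds = {
    "energy": [0.1, 0.4, 0.3, 0.2],
    "f0": [180.0, 200.0, 190.0, 210.0],
}
features = utterance_features(llds)
print(len(features))  # 2 descriptors x 5 functionals = 10
```

Because every utterance maps to a vector of the same length, the resulting features can be fed directly to a conventional classifier such as the random forest used in this work.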



Correspondence to Fawad Hussain.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Butt, A.M., Bhatti, Y.K., Hussain, F. (2020). Emotional Speech Recognition Using SMILE Features and Random Forest Tree. In: Bi, Y., Bhatia, R., Kapoor, S. (eds) Intelligent Systems and Applications. IntelliSys 2019. Advances in Intelligent Systems and Computing, vol 1037. Springer, Cham. https://doi.org/10.1007/978-3-030-29516-5_2
