
International Journal of Speech Technology

Volume 22, Issue 3, pp 473–482

Emotions recognition: different sets of features and models

  • A. Revathi
  • C. Jeyalakshmi

Abstract

Better and more effective human–machine communication is ensured by affective computing, and in recent years healthy research has progressed on recognizing emotions using various databases. This paper emphasizes the effectiveness of different sets of features and modeling techniques in evaluating the performance of multiple speaker-independent and speaker-dependent emotion recognition systems. Improving the performance of an emotion recognition system is a challenging task, since the Berlin EMO-DB database used in this work contains only ten sentences uttered by ten speakers in seven emotions: anger, boredom, disgust, fear, happiness, sadness and neutral. Speaker-dependent and speaker-independent emotion recognition is performed by creating models for all emotions using a vector quantization (VQ) clustering technique, Gaussian mixture modeling (GMM) and continuous density hidden Markov modeling (CDHMM). The emotion recognition system is also evaluated, with clustering used as the modeling technique, for mel frequency cepstral coefficients (MFCC) and MFCC concatenated with probability and shifted delta cepstrum (SDC), mel frequency perceptual linear predictive cepstrum (MFPLPC) and MFPLPC concatenated with probability and SDC, and formants. These features provide complementary evidence in assessing the performance of the system based on the VQ clustering technique. The algorithm yields 99% and 100% overall weighted accuracy recall (WAR) for correct identification of emotion for at least one feature and modeling technique.
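The shifted delta cepstrum (SDC) feature used above stacks delta-cepstral blocks taken at successive frame shifts, so that each frame carries longer-span temporal context than a plain delta. A minimal NumPy sketch of the common N-d-P-k parameterization follows; the function name, default parameters, and edge-clamping policy are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def shifted_delta_cepstra(C, d=1, P=3, k=7):
    """Compute shifted delta cepstra from a cepstral feature matrix.

    C : (T, N) array of T cepstral frames of dimension N.
    d : half-width of each delta window (delta = c[t+d] - c[t-d]).
    P : shift (in frames) between successive delta blocks.
    k : number of delta blocks stacked per frame.

    Returns a (T, N*k) array. Frame indices that fall outside the
    utterance are clamped to the first/last frame (one common choice).
    """
    T, N = C.shape
    idx = np.arange(T)
    blocks = []
    for i in range(k):
        plus = np.clip(idx + i * P + d, 0, T - 1)
        minus = np.clip(idx + i * P - d, 0, T - 1)
        blocks.append(C[plus] - C[minus])  # i-th shifted delta block
    return np.concatenate(blocks, axis=1)
```

With 13-dimensional cepstra and k = 7 blocks, each frame expands to a 91-dimensional SDC vector, which is then concatenated with the base MFCC or MFPLPC features before modeling.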

Keywords

Emotion recognition system (ERS) · Gaussian mixture model (GMM) · Continuous density hidden Markov model (CDHMM) · Mel frequency perceptual linear predictive cepstrum (MFPLPC) · Shifted delta cepstrum (SDC) · Weighted accuracy recall (WAR)

Notes

Compliance with ethical standards

Conflict of interest

The authors have declared that no competing interest exists.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of ECE/SEEE, SASTRA Deemed University, Thanjavur, India
  2. K. Ramakrishnan College of Engineering, Trichy, India
