Adaptive Artificial Neural Network Based Marathi Speech Database Emotion Recognition

  • Lalita Anil Palange
  • Raviraj Vishwambhar Darekar
Conference paper

Abstract

Nowadays, recognizing emotion from the speech signal is a widespread research topic, since speech is the quickest and most natural way for humans to communicate. A number of investigations have been carried out on this topic. Building on the insights of many of these models, this paper aims to recognize emotions from the speech signal in a precise manner. To accomplish this, we propose an adaptive learning architecture in which an artificial neural network learns a multimodal fusion of speech features. The network is trained with a hybrid PSO-FF algorithm, which combines the strengths of both PSO and FF. The performance of the proposed recognition model is analyzed by comparing it with conventional methods on varied performance measures: accuracy, sensitivity, specificity, precision, FPR, FNR, NPV, FDR, F1-score and MCC. The experimental analysis reveals that the proposed model is 10.85% better than the conventional models with respect to accuracy on both the Marathi database and the benchmark database.
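The ten measures named above are all standard functions of the binary confusion matrix. As a minimal sketch (the counts below are illustrative placeholders, not results from the paper):

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Compute the ten measures listed in the abstract from
    binary confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)              # true positive rate (recall)
    specificity = tn / (tn + fp)              # true negative rate
    precision   = tp / (tp + fp)              # positive predictive value
    fpr         = fp / (fp + tn)              # false positive rate
    fnr         = fn / (fn + tp)              # false negative rate
    npv         = tn / (tn + fn)              # negative predictive value
    fdr         = fp / (fp + tp)              # false discovery rate
    f1          = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "fpr": fpr, "fnr": fnr, "npv": npv, "fdr": fdr,
            "f1": f1, "mcc": mcc}

# Illustrative counts only (not from the paper's experiments):
m = classification_metrics(tp=40, tn=45, fp=5, fn=10)
```

For multi-class emotion recognition these measures are typically computed per emotion class (one-vs-rest) and then averaged.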

Keywords

Emotion recognition · Multimodal fusion · Hybrid PSO-FF classifier

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Lalita Anil Palange
    • 1
  • Raviraj Vishwambhar Darekar
    • 2
  1. SVERI’s College of Engineering Pandharpur, Solapur, India
  2. A. G. Patil Institute of Technology, Solapur, India