
Stress Analysis Using Speech Signal

  • Conference paper
International Conference on Innovative Computing and Communications

Part of the book series: Lecture Notes in Networks and Systems ((LNNS,volume 56))

Abstract

Many methods have been developed in the area of human–computer interaction, and "emotional intelligence" has recently become a popular theme. The main objective of this research is to observe and analyze the effect of emotions on a person's performance while carrying out tasks. In this paper, we propose a new approach to stress detection and classification for students during the examination period. We use mel-frequency cepstral coefficients (MFCC) for feature extraction and a support vector machine (SVM) classifier, combined with a rule-based approach that applies energy and fundamental-frequency rules. Three types of corpora were tested and classified. An Indian dataset was created from 50 students, both male and female. Testing of the corpora showed that native region, nationality, and place of residence affect speech frequencies; in particular, the normal speech frequency of the Indian speakers is nearly equal to the angry-speech frequency of the Mongolian speakers. The results show that emotions affect performance at an average rate of 20–30%: a person in a positive emotional state completes a task 20–30% better and faster, while a person in a negative emotional state tends towards failure or a reduced rate of performance on the task. The system achieved more than 90% accuracy for depressive and aggressive stress. The results indicate that, during the examination period, student performance increases in an excited state and decreases in a depressive state.
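The abstract describes the pipeline only at a high level (MFCC features, an SVM classifier, and energy/fundamental-frequency rules). As a minimal sketch of that kind of pipeline, assuming labelled WAV recordings and standard open-source tools (librosa for feature extraction, scikit-learn for the SVM), the following Python code computes mean MFCCs, RMS energy, and F0 per utterance and trains an SVM classifier. The feature set, SVM parameters, and label names are illustrative assumptions, not the authors' exact configuration.

    # Sketch of an MFCC + SVM stress classifier (illustrative, not the authors' exact setup).
    # Assumes a list of WAV files with stress labels such as "normal", "depressive", "aggressive".
    import numpy as np
    import librosa
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    def extract_features(path, n_mfcc=13):
        """Return a fixed-length vector: mean MFCCs, mean RMS energy, mean F0."""
        y, sr = librosa.load(path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
        rms = librosa.feature.rms(y=y)                            # frame-level energy
        f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)             # fundamental frequency per frame
        return np.concatenate([mfcc.mean(axis=1), [rms.mean()], [np.nanmean(f0)]])

    def train_stress_classifier(wav_paths, labels):
        X = np.vstack([extract_features(p) for p in wav_paths])
        X_train, X_test, y_train, y_test = train_test_split(
            X, labels, test_size=0.2, random_state=0)
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")            # SVM classifier
        clf.fit(X_train, y_train)
        print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
        return clf

The rule-based component that the paper combines with the SVM (energy and fundamental-frequency thresholds) is not specified in the abstract, so no thresholds are reproduced here.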



Acknowledgments

The authors would like to thank the Principal of Sipna College of Engineering, Amravati, for the perception testing carried out for this study.

Author information


Corresponding author

Correspondence to Yogesh Gulhane.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Gulhane, Y., Ladhake, S.A. (2019). Stress Analysis Using Speech Signal. In: Bhattacharyya, S., Hassanien, A., Gupta, D., Khanna, A., Pan, I. (eds) International Conference on Innovative Computing and Communications. Lecture Notes in Networks and Systems, vol 56. Springer, Singapore. https://doi.org/10.1007/978-981-13-2354-6_4

