Augmented Tension Detection in Communication: Insights from Prosodic and Content Features

  • Conference paper

Human-Computer Interaction. Multimodal and Natural Interaction (HCII 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12182)

Abstract

Tension in communication often impedes the effective flow of information among participants in a conversation and can thus negatively affect team practices and learning efficiency at school. Interested in developing a computational technique that automatically detects tension in communication, we explore features that signal the presence of tension in human-human communication and investigate the potential of a supervised learning approach. While no tension-annotated dataset is available, there are language resources in which distress is annotated. Although tension may arise in communication from various factors, distress creates discomfort and tension. Leveraging an interview dataset in which the presence or absence of distress is marked, we investigated the prosodic and LIWC (Linguistic Inquiry and Word Count) features that indicate tension. Specifically, we compare 23 prosodic features and LIWC features extracted from 186 interviews in terms of how effectively they indicate the speaker's distress in a one-to-one conversation. Our analysis shows that seven prosodic features and one LIWC feature differ between distress and non-distress interviews. The seven prosodic features are mean intensity, jitter, shimmer, longest silence duration, longest silence position, standard deviation of the interviewee's speaking rate, and hesitation; the one effective LIWC feature is health.

Author information

Correspondence to Bo Zhang or Lu Xiao.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, B., Xiao, L. (2020). Augmented Tension Detection in Communication: Insights from Prosodic and Content Features. In: Kurosu, M. (ed.) Human-Computer Interaction. Multimodal and Natural Interaction. HCII 2020. Lecture Notes in Computer Science, vol 12182. Springer, Cham. https://doi.org/10.1007/978-3-030-49062-1_20

  • DOI: https://doi.org/10.1007/978-3-030-49062-1_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-49061-4

  • Online ISBN: 978-3-030-49062-1

  • eBook Packages: Computer Science, Computer Science (R0)
