
Utilisation of Linguistic and Paralinguistic Features for Academic Presentation Summarisation

  • Conference paper
In: Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2018)

Abstract

We present a method for automatically summarising audio-visual recordings of academic presentations. To generate presentation summaries, keywords are extracted from automatically created transcripts of the spoken content and then augmented with classification output scores for speaker ratings, audience engagement, emphasised speech, and audience comprehension. Summaries are evaluated by eye-tracking participants as they watch both full presentations and the automatically generated summaries; questionnaire responses from the same participants are also reported. As part of these evaluations, we automatically generate heat maps and gaze plots from the eye-tracking data, which provide further insight into user interaction with the content. Automatically generated presentation summaries were found to hold the user's attention and focus for longer than full presentations. Half of the evaluated summaries were found to be significantly more engaging than the full presentations, and the other half somewhat more engaging.
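The scoring details are given in the full text; as a rough illustration of the approach the abstract describes — transcript keyword evidence blended with classifier output scores for speaker rating, engagement, emphasis, and comprehension to select summary segments — the following is a minimal sketch. The field names, weights, and greedy time-budget selection are illustrative assumptions, not the paper's actual method.

```python
# Hedged sketch: rank presentation segments by combining a keyword score
# with high-level paralinguistic classifier scores, then greedily fill a
# summary time budget. All names and weights are illustrative.
from dataclasses import dataclass


@dataclass
class Segment:
    start: float           # segment start time (seconds)
    end: float             # segment end time (seconds)
    keyword_score: float   # transcript keyword density, assumed in [0, 1]
    speaker_rating: float  # classifier outputs, each assumed in [0, 1]
    engagement: float
    emphasis: float
    comprehension: float


def segment_score(seg, w_kw=0.4, w_para=0.6):
    """Blend keyword evidence with the mean of the paralinguistic scores."""
    para = (seg.speaker_rating + seg.engagement +
            seg.emphasis + seg.comprehension) / 4.0
    return w_kw * seg.keyword_score + w_para * para


def summarise(segments, budget=60.0):
    """Greedily pick top-scoring segments until the time budget is spent,
    then return the selection in original presentation order."""
    chosen, used = [], 0.0
    for seg in sorted(segments, key=segment_score, reverse=True):
        duration = seg.end - seg.start
        if used + duration <= budget:
            chosen.append(seg)
            used += duration
    return sorted(chosen, key=lambda s: s.start)
```

A weighted linear blend is only one plausible fusion strategy; the paper itself derives the segment selection from its trained classifiers.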


Notes

  1. https://spokendata.com/.


Acknowledgments

This research is supported by Science Foundation Ireland through the CNGL Programme (Grant 12/CE/I2267) in the ADAPT Centre (www.adaptcentre.ie) at Dublin City University. The authors would like to thank all participants who took part in these evaluations. We would further like to express our gratitude to all participants who took part in previous experiments for classification of the high-level paralinguistic features discussed in this paper.

Author information

Correspondence to Keith Curtis.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Curtis, K., Campbell, N., Jones, G.J.F. (2019). Utilisation of Linguistic and Paralinguistic Features for Academic Presentation Summarisation. In: Bechmann, D., et al. Computer Vision, Imaging and Computer Graphics Theory and Applications. VISIGRAPP 2018. Communications in Computer and Information Science, vol 997. Springer, Cham. https://doi.org/10.1007/978-3-030-26756-8_5


  • DOI: https://doi.org/10.1007/978-3-030-26756-8_5


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-26755-1

  • Online ISBN: 978-3-030-26756-8

  • eBook Packages: Computer Science (R0)
