Abstract
We present a method for automatically summarising audio-visual recordings of academic presentations. To generate presentation summaries, keywords are extracted from automatically created transcripts of the spoken content and then augmented with classification output scores for speaker ratings, audience engagement, emphasised speech, and audience comprehension. Summaries are evaluated by eye-tracking participants as they watch both full presentations and automatically generated summaries; a questionnaire completed by the eye-tracking participants provides additional evaluation. As part of these evaluations, we automatically generate heat maps and gaze plots from the eye-tracking data, which provide further information about user interaction with the content. Automatically generated presentation summaries were found to hold users' attention and focus for longer than full presentations: half of the evaluated summaries were significantly more engaging than the full presentations, while the other half were somewhat more engaging.
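The pipeline the abstract outlines — score transcript segments by keyword content, augment those scores with classifier outputs for paralinguistic features, and keep the top-ranked segments — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the scoring scheme, feature names (`engagement`, etc.), and weights are all assumptions introduced here for clarity.

```python
# Illustrative sketch of keyword-plus-paralinguistic summarisation.
# All function names, feature keys, and weights are hypothetical.
from collections import Counter

def keyword_weight(segment_tokens, keyword_counts, total):
    # Average relative frequency of each token across the whole talk;
    # a stand-in for a proper keyword-extraction score.
    if not segment_tokens:
        return 0.0
    return sum(keyword_counts[t] / total for t in segment_tokens) / len(segment_tokens)

def summarise(segments, classifier_scores, weights, k=2):
    """segments: list of token lists, one per transcript segment.
    classifier_scores: per-segment dicts keyed by paralinguistic
    feature (e.g. 'engagement', 'emphasis', 'comprehension').
    weights: relative weight given to each classifier feature.
    Returns indices of the top-k segments in temporal order."""
    counts = Counter(t for seg in segments for t in seg)
    total = sum(counts.values())
    scored = []
    for i, seg in enumerate(segments):
        score = keyword_weight(seg, counts, total)
        # Augment the keyword score with weighted classifier outputs.
        score += sum(weights[f] * classifier_scores[i][f] for f in weights)
        scored.append((score, i))
    top = sorted(scored, reverse=True)[:k]
    return sorted(i for _, i in top)  # preserve original presentation order
```

For example, a segment with both dense keywords and a high engagement score would outrank a closing-remarks segment with neither, so the summary retains the most content-rich and engaging parts of the talk.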
Acknowledgments
This research is supported by Science Foundation Ireland through the CNGL Programme (Grant 12/CE/I2267) in the ADAPT Centre (www.adaptcentre.ie) at Dublin City University. The authors would like to thank all participants who took part in these evaluations. We would further like to express our gratitude to all participants who took part in previous experiments for classification of the high-level paralinguistic features discussed in this paper.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Curtis, K., Campbell, N., Jones, G.J.F. (2019). Utilisation of Linguistic and Paralinguistic Features for Academic Presentation Summarisation. In: Bechmann, D., et al. Computer Vision, Imaging and Computer Graphics Theory and Applications. VISIGRAPP 2018. Communications in Computer and Information Science, vol 997. Springer, Cham. https://doi.org/10.1007/978-3-030-26756-8_5
DOI: https://doi.org/10.1007/978-3-030-26756-8_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-26755-1
Online ISBN: 978-3-030-26756-8