Automatic Low-Level Overlays on Presentations to Support Regaining an Audience’s Attention

  • Walter Ritter
  • Guido Kempter
  • Isabella Hämmerle
  • Andreas Wohlgenannt
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10901)

Abstract

In a world full of distractions, keeping an audience focused on a presentation is becoming increasingly difficult. In this paper, we propose a system that supports presenters in regaining the attention of the overall audience in a nearly subliminal way. The system uses a measure of motion complexity within the audience area as an estimate of overall attention. It then applies low-level visual overlays to the presentation whenever the estimated level of attention drops too low. Ideally, these dynamically adapted visual overlays can be detected in the peripheral field of view but not in the foveal field of view. In a pilot study with 14 participants, we tested the feasibility of this approach using a simplified version of the system, limiting the stimuli to red-colored overlays with an opacity of up to 20%. First results show that motion complexity can indeed be a good indicator of distraction, and that low-level visual overlays can lead to a higher perceived level of agitation. However, the visual effects used in this pilot study were partly perceived by the audience. Further work is needed to identify visual stimuli that are best suited for recapturing attention without irritating those already focused on the presentation.
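To make the control loop concrete, the following Python sketch (using OpenCV) shows one plausible realization: motion complexity is approximated by the fraction of pixels that change between consecutive camera frames of the audience area, and that estimate is mapped to the opacity of a red overlay, capped at the 20% limit used in the pilot study. The specific metric, the MOTION_THRESHOLD constant, and the linear mapping are illustrative assumptions; the paper does not prescribe these details.

    import cv2
    import numpy as np

    MAX_OPACITY = 0.20       # upper bound on overlay opacity used in the pilot study
    MOTION_THRESHOLD = 0.05  # assumed: fraction of moving pixels above which the
                             # audience counts as distracted (not from the paper)

    def motion_complexity(prev_gray, gray, pixel_thresh=25):
        # Fraction of pixels whose brightness changed noticeably between frames;
        # a simple stand-in for the paper's motion-complexity measure.
        diff = cv2.absdiff(prev_gray, gray)
        return float(np.count_nonzero(diff > pixel_thresh)) / diff.size

    def overlay_opacity(complexity):
        # Map the estimated distraction level linearly onto [0, MAX_OPACITY].
        excess = max(0.0, complexity - MOTION_THRESHOLD)
        return min(MAX_OPACITY, (excess / MOTION_THRESHOLD) * MAX_OPACITY)

    cap = cv2.VideoCapture(0)  # camera pointed at the audience area
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("camera not available")
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        alpha = overlay_opacity(motion_complexity(prev_gray, gray))
        prev_gray = gray
        # Blend a red layer over the image; the real system would blend it over
        # the projected slides rather than the camera feed previewed here.
        red = np.zeros_like(frame)
        red[:, :, 2] = 255  # OpenCV uses BGR channel order
        preview = cv2.addWeighted(red, alpha, frame, 1.0 - alpha, 0.0)
        cv2.imshow("overlay preview", preview)
        if cv2.waitKey(30) & 0xFF == ord("q"):  # press 'q' to quit
            break

    cap.release()
    cv2.destroyAllWindows()

In a deployed system, one would additionally want to ramp the opacity up and down gradually, since an abrupt change is easy to notice even in the foveal field of view; exponentially smoothing alpha across frames would be a simple starting point.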

Keywords

Presentation support · Subliminal · Adaptive systems · Visual overlays

Acknowledgements

Financial support for this project was provided by the Austrian Research Promotion Agency (FFG) under the scope of the COMET program within the research project “Easy to use professional business and system control applications (LiTech)” (contract # 843535). This program is promoted by BMVIT, BMWFJ and the federal state of Vorarlberg.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Walter Ritter¹
  • Guido Kempter¹
  • Isabella Hämmerle¹
  • Andreas Wohlgenannt²

  1. Vorarlberg University of Applied Sciences, Dornbirn, Austria
  2. WolfVision GmbH, Klaus, Austria