A Method of Evaluating User Visual Attention to Moving Objects in Head Mounted Virtual Reality

  • Shi Huang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10918)


Virtual reality games, films, and applications pose new challenges to conventional film grammar and design principles because users enjoy greater spatial freedom in a six-degree-of-freedom (6-DOF) head-mounted display (HMD). This paper introduces a simple model of viewers' visual attention in a virtual reality environment while they watch randomly generated moving objects. The model is based on a dataset collected from 10 users during a 50-second virtual reality experience on an HTC Vive. We considered three factors as the major parameters affecting audience attention: the distance between the object and the viewer, the speed of the object's movement, and the direction in which the object moves. We hope the results will be useful to immersive film directors and VR game designers.
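The paper does not reproduce its model here, but the three parameters it names (object distance, speed, and movement direction) can be illustrated with a minimal sketch. The weighting function, the per-factor normalizations, and the default weights below are all assumptions for illustration, not the authors' fitted model:

```python
import math

def attention_score(viewer_pos, obj_pos, obj_velocity,
                    w_dist=0.4, w_speed=0.3, w_dir=0.3):
    """Hypothetical attention score in [0, 1] combining the three
    factors named in the abstract. Weights and factor shapes are
    illustrative assumptions only."""
    # Vector from the object to the viewer, and their separation.
    to_viewer = [v - o for v, o in zip(viewer_pos, obj_pos)]
    dist = math.sqrt(sum(c * c for c in to_viewer))

    # Distance factor: nearer objects assumed more salient, in (0, 1].
    f_dist = 1.0 / (1.0 + dist)

    # Speed factor: faster motion assumed more salient, saturating toward 1.
    speed = math.sqrt(sum(c * c for c in obj_velocity))
    f_speed = speed / (1.0 + speed)

    # Direction factor: 1 when the object moves straight toward the
    # viewer, 0 when it moves straight away (cosine remapped to [0, 1]).
    if speed > 0 and dist > 0:
        cos_a = sum(t * v for t, v in zip(to_viewer, obj_velocity)) / (dist * speed)
        f_dir = (cos_a + 1.0) / 2.0
    else:
        f_dir = 0.5  # stationary object: neutral direction score

    return w_dist * f_dist + w_speed * f_speed + w_dir * f_dir
```

Under these assumptions, a nearby object approaching the viewer scores higher than a distant or receding one, which matches the intuition behind the three parameters.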


Keywords: Virtual reality · Focus of attention · Immersive film · VR game · VR experience



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Animation and Digital Arts Academy, Communication University of China, Beijing, China
