Image Blurring Method for Enhancing Digital Content Viewing Experience

  • Hiroaki Yamaura
  • Masayuki Tamura
  • Satoshi Nakamura
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10901)


Abstract

Many systems have been studied for enhancing a user's interest in digital content by using HMDs, 3DTVs, and so on. However, for these systems to enhance a user's interest, the creator needs to carefully craft the content. In this paper, we present a method that extends the experience of digital content simply by superimposing blurring effects that follow the user's gaze point. To clarify how the experience of viewing still images and videos was enhanced by our method, we compared user impressions when viewing digital content with and without it. We also examined physiological impressions such as visibility and discomfort. The experimental results showed that the participants' impressions of video content changed when the blurring effect was superimposed on the peripherally viewed area. In particular, almost all psychological impression items (immersion, stereoscopic effect, and so on) were scored higher with the blur superimposed than without it.


Keywords: Viewing experience expansion · Impression evaluation · Gaze point · Peripheral vision
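The core mechanism described in the abstract, blurring the peripherally viewed area while keeping the region around the gaze point sharp, can be illustrated with a short sketch. This is not the authors' implementation: the function names, the box blur, and the radial cross-fade are illustrative assumptions, and a real system would obtain `gaze_xy` from an eye tracker every frame.

```python
import numpy as np

def box_blur(img, k=3):
    # Naive (2k+1) x (2k+1) box blur on a 2-D grayscale image,
    # built by averaging shifted copies of an edge-padded array.
    h, w = img.shape
    pad = np.pad(img, k, mode="edge")
    acc = np.zeros((h, w), dtype=float)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            acc += pad[dy:dy + h, dx:dx + w]
    return acc / (2 * k + 1) ** 2

def gaze_contingent_blur(img, gaze_xy, radius=40, k=3):
    # Keep a sharp foveal disc around the gaze point and cross-fade
    # into the blurred image over a band of width `radius`.
    h, w = img.shape
    gx, gy = gaze_xy
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - gx, yy - gy)
    # alpha = 0 (sharp) inside the disc, ramping to 1 (fully blurred)
    alpha = np.clip((dist - radius) / radius, 0.0, 1.0)
    return (1.0 - alpha) * img + alpha * box_blur(img, k)
```

For video, the same compositing step would simply be repeated per frame with the latest gaze sample; the smooth alpha ramp avoids a visible hard edge between the sharp and blurred regions.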



This work was supported in part by JST ACCEL Grant Number JPMJAC1602, Japan.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Meiji University, Tokyo, Japan