Shape from Motion Blur Caused by Random Camera Rotations Imitating Fixational Eye Movements

  • Norio Tagawa
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 458)

Abstract

Small involuntary vibrations of the human eyeball, called "fixational eye movements," play a role in image analysis, for example in contrast enhancement and edge detection. This mechanism can be interpreted as stochastic resonance realized by biological processes, in particular by neuron dynamics. We propose two algorithms that use the motion blur caused by many small random camera motions to recover the depth from a camera to a target object. The first is a two-step recovery method that detects the motion blur in an image and then analyzes it to determine the depth. The second method recovers the depth directly, without explicitly detecting the motion blur, and is expected to be more accurate. From the viewpoint of computational optimality, in this study we evaluate the performance of the second method, called the direct method, through numerical simulations using artificial images.
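As a rough illustration of the two-step idea only (not the paper's actual rotation model), suppose each small camera jitter induces an image-plane displacement whose magnitude scales as focal length times jitter divided by depth, as in a pinhole model with a translational jitter component. Then the blur statistics accumulated over many random jitters can be inverted for depth. All names, the linear blur model, and the parameter values below are illustrative assumptions:

```python
import math
import random

def simulate_blur_extents(depth, focal=500.0, jitter_std=0.01, n=2000, seed=0):
    """Toy step 1: accumulate image-plane displacement magnitudes caused
    by many small random camera jitters.  Assumed (hypothetical) model:
    displacement = focal * |t| / depth, with t ~ N(0, jitter_std^2)."""
    rng = random.Random(seed)
    return [focal * abs(rng.gauss(0.0, jitter_std)) / depth for _ in range(n)]

def estimate_depth(blur_extents, focal=500.0, jitter_std=0.01):
    """Toy step 2: invert the assumed linear blur-depth relation.
    E[blur] = focal * E[|t|] / Z, and for t ~ N(0, s^2) the folded-normal
    mean is E[|t|] = s * sqrt(2/pi), so Z = focal * E[|t|] / E[blur]."""
    mean_blur = sum(blur_extents) / len(blur_extents)
    expected_abs_t = jitter_std * math.sqrt(2.0 / math.pi)
    return focal * expected_abs_t / mean_blur

blurs = simulate_blur_extents(depth=2.0)
print(estimate_depth(blurs))  # close to the simulated depth of 2.0
```

The direct method described in the abstract would skip the intermediate blur measurement and fit the depth to the image data in one estimation step, which avoids propagating errors from the blur-detection stage.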

Keywords

Shape from motion blur · Random camera rotation · Fixational eye movements · Stochastic resonance

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. Graduate School of System Design, Tokyo Metropolitan University, Hino-shi, Japan