Accuracy Improvement for Depth from Small Irregular Camera Motions and Its Performance Evaluation

  • Syouta Tsukada
  • Yishin Ho
  • Norio Tagawa (email author)
  • Kan Okubo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9164)

Abstract

We have proposed three-dimensional recovery methods that use random camera rotations imitating the involuntary movements of a human eyeball. These methods fall roughly into two types. One is a differential type that uses temporal changes of image intensity and is suited to images whose texture is coarse relative to the amplitude of the image motion. The other is an integral type that uses the image blur caused by the camera motions and is intended for finely textured images. In this study, we focus on the differential-type method. In this method, it is important that image pairs unsuited to the gradient equation not be used in the computation. We therefore attempt to improve accuracy by selecting suitable image pairs at each pixel and using only those pairs to recover the depth map. Additionally, we evaluate the performance of the improved method by implementing a camera system that captures images while performing small irregular rotations.
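The differential-type method rests on the gradient (optical flow constraint) equation, and the improvement studied here is the per-pixel selection of suitable image pairs. The following is a minimal sketch of that idea, not the authors' implementation: the names estimate_inverse_depth, flow_depth_coeff, flow_offset, and max_disp are illustrative assumptions, the depth-dependent and depth-independent parts of the image motion are assumed to be supplied from the known small camera rotations, and the selection rule (implied displacement small relative to the local texture) is only one plausible stand-in for the paper's criterion.

import numpy as np

def estimate_inverse_depth(ref, frames, flow_depth_coeff, flow_offset, max_disp=1.0):
    """Sketch of gradient-equation depth recovery with per-pixel pair selection.

    ref:              (H, W) float reference image.
    frames:           K float images taken under small, known camera motions.
    flow_depth_coeff: (K, H, W, 2) depth-dependent part of the image motion (a_k).
    flow_offset:      (K, H, W, 2) depth-independent, rotational part (b_k).
    max_disp:         hypothetical threshold (pixels) on the displacement implied by a pair.
    Returns a per-pixel inverse-depth map (0 where no pair was selected)."""
    Iy, Ix = np.gradient(ref)              # spatial gradients of the reference image
    grad_mag = np.hypot(Ix, Iy)
    num = np.zeros(ref.shape)
    den = np.zeros(ref.shape)
    for k, frame in enumerate(frames):
        It = frame - ref                   # temporal intensity change for pair k
        # Gradient equation: Ix*u + Iy*v + It ~ 0, with (u, v) = a_k / Z + b_k.
        c = Ix * flow_depth_coeff[k, ..., 0] + Iy * flow_depth_coeff[k, ..., 1]
        d = -(It + Ix * flow_offset[k, ..., 0] + Iy * flow_offset[k, ..., 1])
        # Keep, per pixel, only pairs for which the linearization behind the
        # gradient equation is plausible: |It| / |grad I| below max_disp pixels.
        ok = (grad_mag > 1e-6) & (np.abs(It) < max_disp * grad_mag)
        num += np.where(ok, c * d, 0.0)
        den += np.where(ok, c * c, 0.0)
    # Per-pixel least squares over the selected pairs: 1/Z = sum(c*d) / sum(c*c).
    return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)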

Keywords

Shape from motion · Gradient method · Random camera motions · Fixational eye movements

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Syouta Tsukada (1)
  • Yishin Ho (1)
  • Norio Tagawa (1) (email author)
  • Kan Okubo (1)
  1. Graduate School of System Design, Tokyo Metropolitan University, Hino, Japan