
Determining Spatial Motion Directly from Normal Flow Field: A Comprehensive Treatment

  • Tak-Wai Hui
  • Ronald Chung
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6468)

Abstract

Determining the motion of the camera relative to the imaged scene from video is important for various robotics tasks, including visual control and autonomous navigation. The difficulty of the problem lies mainly in the fact that the flow pattern directly observable in the video is generally not the full flow field induced by the motion, but only its component along the local image gradient, known as the normal flow field. A few methods, collectively referred to as direct methods, have been proposed to determine spatial motion from the normal flow field alone, without ever interpolating the full flows. However, such methods generally have difficulty addressing the case of general motion. This work proposes a new direct method that determines motion using two constraints: one related to the direction component of the normal flow field, and the other to its magnitude component. The first constraint takes the form of a system of linear inequalities that bound the motion parameters; the second exploits the fact that the rotation magnitude is global to all image positions to constrain the motion parameters further. A two-stage iterative process in a coarse-to-fine framework is used to exploit the two constraints. Experimental results on benchmark data show that the new treatment can tackle even the case of general motion.
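
To make the direction constraint concrete, the sketch below illustrates the sign-consistency (positive-depth) inequality that direct methods commonly derive from normal flow. It is a minimal NumPy illustration under a unit-focal-length perspective model; the function names and the violation-counting test are illustrative assumptions, and the paper's exact formulation and its two-stage coarse-to-fine scheme are not reproduced here.

    # Minimal sketch of the direction (sign-consistency) constraint derived
    # from normal flow.  Under rigid motion with translation t and rotation w
    # (focal length assumed 1), the full flow at image point (x, y) is
    #   u = (1/Z) * A(x, y) @ t + B(x, y) @ w,   with scene depth Z > 0,
    # and only its projection v_n = n . u onto the gradient direction n is
    # observable.  Because Z > 0, the residual v_n - n . (B @ w) must share
    # the sign of n . (A @ t), which is a linear inequality in the motion
    # parameters at every pixel.

    import numpy as np

    def flow_matrices(x, y):
        """Translational and rotational flow matrices at image point (x, y)."""
        A = np.array([[-1.0, 0.0, x],
                      [0.0, -1.0, y]])
        B = np.array([[x * y, -(1.0 + x ** 2), y],
                      [1.0 + y ** 2, -x * y, -x]])
        return A, B

    def inequality_violations(points, normals, v_n, t, w, tol=1e-6):
        """Count pixels whose observed normal flow is inconsistent with (t, w).

        points  : (N, 2) image coordinates
        normals : (N, 2) unit gradient directions
        v_n     : (N,)   signed normal flow magnitudes along the gradients
        """
        violations = 0
        for (x, y), n, vn in zip(points, normals, v_n):
            A, B = flow_matrices(x, y)
            rot_part = n @ (B @ w)      # rotational contribution along n
            trans_part = n @ (A @ t)    # depth-scaled translational term along n
            residual = vn - rot_part    # what must be explained by translation / Z
            # Positive depth forces residual and trans_part to agree in sign.
            if residual * trans_part < -tol:
                violations += 1
        return violations

A simple candidate search over sampled translations and rotations could keep the (t, w) with the fewest violations; the paper instead couples such direction information with a second, magnitude-based constraint inside an iterative coarse-to-fine procedure.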

Keywords

Motion vector, motion parameter, linear inequality, general motion, spatial motion



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Tak-Wai Hui (1)
  • Ronald Chung (1)
  1. Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
