
Multimedia Tools and Applications, Volume 78, Issue 1, pp 641–659

Context-adaptive matching for optical flow

  • Yueran Zu
  • Wenzhong Tang
  • Xiuguo Bao
  • Yanyang Wang
  • Ke Gao

Abstract

Modern sparse-to-dense optical flow algorithms typically achieve state-of-the-art performance. These algorithms involve two steps: matching and interpolation. Matching is often unreliable for very large displacements because of illumination changes, deformations, occlusion, and similar effects. Moreover, conspicuous errors around motion discontinuities remain serious, since most methods consider edges only at the interpolation step. We propose context-adaptive matching (CAM) for optical flow, which handles large displacements better and preserves edges. CAM is selective in feature extraction and adaptive in both flow propagation and search-radius adjustment. Selective features bring edge preservation into the matching step: in addition to the commonly used SIFT descriptor, local directional pattern flow (LDPF) is introduced to retain more edge structure, and oriented FAST and rotated BRIEF (ORB) is used to select the most similar candidates. Unlike coarse-to-fine matching, which propagates only among immediate neighbors, we propose adaptive propagation that extends the set of matching candidates and thereby improves the chance of finding correct correspondences. Furthermore, guided by prior knowledge and exploiting the results of coarser layers, an adaptive search radius replaces the fixed radius at finer layers. Interpolated by EpicFlow, CAM is fast and robust for large displacements, especially for fast-moving objects, while preserving edge structure well. Extensive experiments show that our algorithm is on par with state-of-the-art optical flow methods on MPI-Sintel, KITTI, and Middlebury.
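To make the LDPF ingredient concrete, the local directional pattern on which it builds (Jabid et al., reference 19) encodes each pixel by applying the eight Kirsch compass masks to its 3×3 neighborhood and setting the bits of the k strongest absolute responses. The following is a minimal NumPy sketch of plain LDP under that description, not the authors' LDPF implementation; the function name `ldp_code` is illustrative.

```python
import numpy as np

# Eight Kirsch compass masks, one per direction (E, NE, N, NW, W, SW, S, SE).
KIRSCH = np.array([
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # E
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # NE
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # N
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # NW
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # W
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # SW
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # S
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # SE
], dtype=np.float64)

def ldp_code(patch, k=3):
    """8-bit LDP code of a 3x3 patch: apply all eight Kirsch masks and
    set the bits corresponding to the k largest absolute responses."""
    patch = np.asarray(patch, dtype=np.float64)
    # Broadcast (8,3,3) * (3,3), then sum each mask's response over the patch.
    responses = np.abs((KIRSCH * patch).sum(axis=(1, 2)))
    top = np.argsort(responses)[-k:]   # indices of the k strongest directions
    code = 0
    for i in top:
        code |= 1 << int(i)
    return code
```

The resulting code is an integer in [0, 255] with exactly k bits set, so it is robust to monotonic intensity changes along non-dominant directions, which is the edge-structure property the abstract attributes to LDPF.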

Keywords

Optical flow · PatchMatch · Edge preserving · Large displacement

Notes

Acknowledgments

This work was supported by Beijing Municipal Science and Technology Commission Project Z171100000117010, the National Key Research and Development Plan (Nos. 2016YFB0801203, 2016YFB0801200), and the National Natural Science Foundation of China (61271428).

References

  1. Bailer C, Taetz B, Stricker D (2015) Flow fields: dense correspondence fields for highly accurate large displacement optical flow estimation. In: Proceedings of the IEEE international conference on computer vision, pp 4015–4023
  2. Baker S, Scharstein D, Lewis J, Roth S, Black MJ, Szeliski R (2011) A database and evaluation methodology for optical flow. Int J Comput Vis 92(1):1–31
  3. Bao L, Yang Q, Jin H (2014) Fast edge-preserving PatchMatch for large displacement optical flow. IEEE Trans Image Process 23(12):4996–5006
  4. Barnes C, Shechtman E, Finkelstein A, Goldman DB (2009) PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans Graph 28(3, article 24):1–11
  5. Barnes C, Shechtman E, Goldman DB, Finkelstein A (2010) The generalized PatchMatch correspondence algorithm. In: European conference on computer vision, pp 29–43
  6. Black MJ, Anandan P (1991) Robust dynamic motion estimation over time. In: Computer vision and pattern recognition. IEEE, pp 296–302
  7. Black MJ, Anandan P (1996) The robust estimation of multiple motions: parametric and piecewise-smooth flow fields. Comput Vis Image Understand 63(1):75–104
  8. Bouguet JY (2001) Pyramidal implementation of the affine Lucas Kanade feature tracker: description of the algorithm. Intel Corp 5(1–10):4
  9. Brox T, Malik J (2011) Large displacement optical flow: descriptor matching in variational motion estimation. IEEE Trans Pattern Anal Mach Intell 33(3):500–513
  10. Brox T, Bruhn A, Papenberg N, Weickert J (2004) High accuracy optical flow estimation based on a theory for warping. In: European conference on computer vision (ECCV), vol 3024. Springer, pp 25–36
  11. Bruhn A, Weickert J, Schnörr C (2005) Lucas/Kanade meets Horn/Schunck: combining local and global optic flow methods. Int J Comput Vis 61(3):211–231
  12. Butler DJ, Wulff J, Stanley GB, Black MJ (2012) A naturalistic open source movie for optical flow evaluation. In: European conference on computer vision, pp 611–625
  13. Dosovitskiy A, Fischer P, Ilg E et al (2015) FlowNet: learning optical flow with convolutional networks. In: IEEE international conference on computer vision. IEEE, pp 2758–2766
  14. Drayer B, Brox T (2015) Combinatorial regularization of descriptor matching for optical flow estimation. In: BMVC, pp 42–1
  15. Geiger A (2012) Are we ready for autonomous driving? The KITTI vision benchmark suite. In: Computer vision and pattern recognition, pp 3354–3361
  16. Horn BKP, Schunck BG (1980) Determining optical flow. Artif Intell 17(1–3):185–203
  17. Hosni A, Rhemann C, Bleyer M, Rother C, Gelautz M (2013) Fast cost-volume filtering for visual correspondence and beyond. IEEE Trans Pattern Anal Mach Intell 35(2):504–511
  18. Hu Y, Song R, Li Y (2016) Efficient coarse-to-fine PatchMatch for large displacement optical flow. In: IEEE conference on computer vision and pattern recognition, pp 5704–5712
  19. Jabid T, Kabir MH, Chae O (2010) Local directional pattern (LDP) for face recognition. In: 2010 Digest of technical papers international conference on consumer electronics (ICCE). IEEE, pp 329–330
  20. Kennedy R, Taylor CJ (2015) Optical flow with geometric occlusion estimation and fusion of multiple frames. In: International workshop on energy minimization methods in computer vision and pattern recognition. Springer, pp 364–377
  21. Li Y, Min D, Do MN, Lu J (2016) Fast guided global interpolation for depth and motion. In: European conference on computer vision. Springer, pp 717–733
  22. Liu C, Yuen J, Torralba A (2011) SIFT flow: dense correspondence across scenes and its applications. IEEE Trans Pattern Anal Mach Intell 33(5):978–994
  23. Lowe DG (2004) Distinctive image features from scale-invariant keypoints. Int J Comput Vis 60(2):91–110
  24. Lu J, Yang H, Min D, Do MN (2013) Patch match filter: efficient edge-aware filtering meets randomized search for fast correspondence field estimation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1854–1861
  25. Lucas BD, Kanade T (1981) An iterative image registration technique with an application to stereo vision. In: International joint conference on artificial intelligence, pp 674–679
  26. Menze M, Geiger A (2015) Object scene flow for autonomous vehicles. In: Computer vision and pattern recognition, pp 3061–3070
  27. Muja M (2009) Fast approximate nearest neighbors with automatic algorithm configuration. In: International conference on computer vision theory and applications (VISAPP), pp 331–340
  28. Revaud J, Weinzaepfel P, Harchaoui Z, Schmid C (2015) EpicFlow: edge-preserving interpolation of correspondences for optical flow. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1164–1172
  29. Roth S, Lempitsky V, Rother C (2009) Discrete-continuous optimization for optical flow estimation. In: Statistical and geometrical approaches to visual motion analysis. Springer, pp 1–22
  30. Rublee E, Rabaud V, Konolige K, Bradski G (2011) ORB: an efficient alternative to SIFT or SURF. In: 2011 IEEE international conference on computer vision (ICCV). IEEE, pp 2564–2571
  31. Sun D, Roth S, Black MJ (2014) A quantitative analysis of current practices in optical flow estimation and the principles behind them. Int J Comput Vis 106(2):115–137
  32. Timofte R, Gool LV (2015) Sparse flow: sparse matching for small to large displacement optical flow. In: IEEE winter conference on applications of computer vision, pp 1100–1106
  33. Wang S, Ryan Fanello S, Rhemann C, Izadi S, Kohli P (2016) The global patch collider. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 127–135
  34. Weinzaepfel P, Revaud J, Harchaoui Z, Schmid C (2013) DeepFlow: large displacement optical flow with deep matching. In: IEEE international conference on computer vision, pp 1385–1392
  35. Xu L, Jia J, Matsushita Y (2010) Motion detail preserving optical flow estimation. In: Computer vision and pattern recognition, pp 1293–1300

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2017

Authors and Affiliations

  1. School of Computer Science and Engineering, Beihang University, Beijing, China
  2. National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT), Beijing, China
  3. School of Aeronautic Science and Engineering, Beihang University, Beijing, China
  4. Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
