
Egomotion Estimation Using Background Feature Point Matching in OpenCV Environment

  • Conference paper

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 446)

Abstract

Autonomous systems are generally equipped with an assembly of multiple sensors, such as radar, ultrasonic sensors, an IMU (Inertial Measurement Unit), cameras, and GPS. In complex scenarios, controlling unmanned systems in GPS-denied environments depends on a quick estimate of their current position in space, which cameras can provide. Cameras deliver information similar to human vision while requiring little construction space at low cost. Estimating a camera’s egomotion from an image sequence therefore helps to overcome the practical difficulties of autonomous camera control. The main disadvantage of using cameras in dynamic environments is unwanted movement and jitter in the captured data, which degrades the performance of embedded vision applications. In this paper, we describe a feature-based, high-frame-rate egomotion estimation algorithm that combines gradient projection with the Gabor wavelet transform and is suitable for real-time computer vision applications. Reliable singularity points are extracted through gradient projection to reduce processing time, and egomotion is derived by applying RANSAC. The simulation was carried out in the OpenCV environment, and the results demonstrate the efficiency of the proposed technique.
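Since the abstract only outlines the pipeline, the sketch below illustrates, in Python with OpenCV, how detected singularity points could be described with a small Gabor filter bank. It is a minimal illustration under assumed settings: the function name gabor_responses, the filter parameters, and the use of cv2.getGaborKernel and cv2.filter2D are not taken from the paper.

```python
import cv2
import numpy as np

def gabor_responses(gray, points, ksize=21, sigma=4.0, lambd=10.0,
                    gamma=0.5, n_orient=4):
    """Describe each (x, y) point by the responses of a Gabor filter bank.

    Illustrative only: the paper's actual Gabor parameters are not given here.
    """
    # Build one Gabor kernel per orientation.
    bank = [cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
            for theta in np.arange(0.0, np.pi, np.pi / n_orient)]
    # Filter the whole frame once per kernel, then sample at the given points.
    responses = [cv2.filter2D(gray, cv2.CV_32F, k) for k in bank]
    descriptors = [[resp[int(y), int(x)] for resp in responses]
                   for (x, y) in points]
    return np.float32(descriptors)
```

Each point is thus represented by one response value per orientation; a fuller descriptor would typically also sample several scales.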

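The RANSAC stage of the abstract, estimating camera motion from background point correspondences between consecutive frames, can likewise be sketched with standard OpenCV calls. In this sketch, ORB features stand in for the paper's gradient-projection/Gabor features, and the calibration matrix K and the function name estimate_egomotion are assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def estimate_egomotion(prev_gray, curr_gray, K):
    """Estimate rotation R and unit-scale translation t between two frames.

    ORB stands in here for the paper's gradient-projection/Gabor features.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects correspondences on moving foreground objects as outliers,
    # so the recovered motion reflects the static background, i.e. the camera.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# Example calibration matrix (focal length and principal point are assumptions).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
```

The translation returned by cv2.recoverPose is only defined up to scale; recovering metric scale requires additional information such as stereo, known scene geometry, or an IMU.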


Acknowledgements

The authors are grateful to the Department of Science and Technology for the award of a DST-INSPIRE Fellowship to carry out this research work.

Author information

Corresponding author

Correspondence to Nedumaran Damodaran.


Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Bakthavachalam, S., Damodaran, N. (2018). Egomotion Estimation Using Background Feature Point Matching in OpenCV Environment. In: Bhuvaneswari, M., Saxena, J. (eds) Intelligent and Efficient Electrical Systems. Lecture Notes in Electrical Engineering, vol 446. Springer, Singapore. https://doi.org/10.1007/978-981-10-4852-4_22


  • DOI: https://doi.org/10.1007/978-981-10-4852-4_22

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-4851-7

  • Online ISBN: 978-981-10-4852-4

  • eBook Packages: Engineering, Engineering (R0)
