Image Matching for Space Objects Based on Grid-Based Motion Statistics

  • Shanlan Nie
  • Zhiguo Jiang
  • Haopeng Zhang
  • Quanmao Wei
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 875)


Image matching for space objects has attracted wide attention because of its importance in many applications. The major challenges for this task are the textureless appearance and symmetrical structure of space objects. In this paper, we propose a novel image matching method that aims to improve matching quality for space objects. Our approach consists of three main components: grid-based motion statistics (GMS), a contrario random sample consensus (AC-RANSAC), and a three-view constraint. First, GMS is used to generate a collection of corresponding points. Next, we adopt AC-RANSAC to eliminate false matches and estimate the fundamental matrix. Finally, accurate matches are obtained under the three-view constraint. Experimental results on simulated images of space objects quantitatively and qualitatively demonstrate the effectiveness of our approach.
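The first stage of the pipeline rests on the GMS observation that true matches are supported by other matches in their neighbourhood, while false matches are isolated. The following is a minimal, single-scale sketch of that grid-voting idea, not the authors' implementation (the full GMS method aggregates over neighbouring cells and multiple grid offsets, scales, and rotations); the function name, the single-grid simplification, and the threshold form τ = α·√n are illustrative assumptions here.

```python
import numpy as np

def gms_score_filter(pts1, pts2, img_size, grid=8, alpha=4.0):
    """Keep matches whose source-cell/target-cell pair is supported by
    many other matches. Simplified single-scale sketch of the GMS idea.

    pts1, pts2 : (N, 2) arrays of matched keypoint coordinates (x, y)
    img_size   : (width, height), assumed equal for both images
    """
    w, h = img_size
    # Assign every keypoint to a cell of a grid x grid partition.
    cell1 = (pts1[:, 0] // (w / grid)).astype(int) * grid \
          + (pts1[:, 1] // (h / grid)).astype(int)
    cell2 = (pts2[:, 0] // (w / grid)).astype(int) * grid \
          + (pts2[:, 1] // (h / grid)).astype(int)

    keep = np.zeros(len(pts1), dtype=bool)
    for c in np.unique(cell1):
        idx = np.where(cell1 == c)[0]
        # Vote: which target cell do matches from this source cell point to?
        targets, counts = np.unique(cell2[idx], return_counts=True)
        best = targets[np.argmax(counts)]
        # GMS-style statistical threshold tau = alpha * sqrt(n).
        if counts.max() > alpha * np.sqrt(len(idx)):
            keep[idx[cell2[idx] == best]] = True
    return keep
```

A consistent cluster of matches between one cell pair easily exceeds the √n threshold, while scattered false matches do not; this is what makes the statistic robust without any geometric model.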


Keywords: Image matching · Space objects · Grid-based motion statistics · AC-RANSAC
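The three-view constraint used in the final stage amounts to cycle consistency across image pairs: a match between views 1 and 2 is kept only when the 2→3 and 1→3 matches agree on the same point in the third view. A minimal sketch under that interpretation (the function name and the index-pair match representation are illustrative, not from the paper):

```python
def three_view_filter(m12, m23, m13):
    """Keep a view-1/view-2 match (i, j) only if the three views agree:
    j maps to some k in view 3 (via m23) and i maps to the same k (via m13).

    Each argument is a list of (source_index, target_index) pairs.
    """
    m23d = dict(m23)  # view-2 index -> view-3 index
    m13d = dict(m13)  # view-1 index -> view-3 index
    return [(i, j) for i, j in m12
            if j in m23d and m13d.get(i) == m23d[j]]
```

Any false match that survives pairwise filtering is unlikely to close the loop through a third view, which is why this check removes residual outliers cheaply.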



This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61501009, 61771031 and 61371134), the National Key Research and Development Program of China (2016YFB0501300, 2016YFB0501302) and the Aerospace Science and Technology Innovation Fund of CASC (China Aerospace Science and Technology Corporation).



Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Shanlan Nie (1, 2)
  • Zhiguo Jiang (1, 2)
  • Haopeng Zhang (1, 2) (corresponding author)
  • Quanmao Wei (1, 2)
  1. Image Processing Center, School of Astronautics, Beihang University, Beijing, People's Republic of China
  2. Beijing Key Laboratory of Digital Media, Beijing, People's Republic of China
