
An Image Mosaic Method Based on Convolutional Neural Network Semantic Features Extraction

  • Zaifeng Shi
  • Hui Li
  • Qingjie Cao
  • Huizheng Ren
  • Boyu Fan

Abstract

Because traditional image feature extraction methods rely on low-level cues such as corner points, a new method based on semantic feature extraction is proposed, inspired by convolutional neural network attacks. The semantic feature of each pixel in an image is computed and quantified by a neural network to represent that pixel's contribution to the image semantics. The quantified contribution values are then sorted, the semantic feature points are selected from high to low, and the image mosaic is completed. Experimental results show that the method effectively extracts image features and completes the image mosaic.
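
As a hedged illustration of the pipeline described above (not the authors' implementation), the sketch below estimates each pixel's semantic contribution as the gradient magnitude of a pretrained CNN's top class score with respect to the input, in the spirit of gradient-based network attacks, then sorts the contributions and keeps the k strongest pixels as candidate feature points. The choice of VGG-16, the 224 x 224 working resolution, and the top-k selection are illustrative assumptions.

    # Sketch only: per-pixel "semantic contribution" via gradient saliency.
    # Model (VGG-16), input size, and k are assumptions, not the paper's setup.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    def semantic_feature_points(image_path, k=500):
        model = models.vgg16(pretrained=True).eval()
        preprocess = T.Compose([
            T.Resize((224, 224)),
            T.ToTensor(),
            T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225]),
        ])
        img = Image.open(image_path).convert("RGB")
        x = preprocess(img).unsqueeze(0).requires_grad_(True)

        score = model(x).max()      # top class score drives the gradient
        score.backward()

        # Contribution of each pixel: largest gradient magnitude over channels.
        saliency = x.grad.abs().max(dim=1).values.squeeze(0)   # H x W

        # Sort contributions from high to low and keep the k strongest pixels
        # as candidate semantic feature points (coordinates refer to the
        # resized 224 x 224 frame and would be mapped back before matching).
        idx = saliency.flatten().argsort(descending=True)[:k]
        ys, xs = idx // saliency.shape[1], idx % saliency.shape[1]
        return torch.stack([ys, xs], dim=1)    # (k, 2) pixel coordinates

Matching such points between overlapping images and estimating a homography (for example with RANSAC) would then complete the mosaic, as in conventional feature-based stitching.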

Keywords

Image mosaic · Image feature extraction · Convolutional neural network · Neural network attack

Notes

Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61674115) and the Natural Science Foundation of Tianjin City, China (No. 17JCYBJC15900).

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Microelectronics, Tianjin University, Tianjin, China
  2. Tianjin Key Lab of Imaging & Sensing Microelectronics Technology, Tianjin, China
  3. School of Mathematical Sciences, Tianjin Normal University, Tianjin, China
