
Infrared blind-pixel compensation algorithm based on generative adversarial networks and Poisson image blending

  • Suting Chen (corresponding author)
  • Meng Jin
  • Yanyan Zhang
  • Chuang Zhang
Original Paper

Abstract

Infrared focal plane arrays suffer from nonuniformity, which produces blind pixels in infrared images and degrades image quality. This paper proposes an infrared blind-pixel compensation algorithm based on generative adversarial networks and Poisson image blending (GAN–PIB). Rather than following the interpolation-and-filtering approach of existing compensation algorithms, it predicts the grey levels of the blind pixels in the original image by combining a pre-trained adversarial network with a blind-pixel compensation loss function and generating a new image. First, a blind-pixel compensation network is built on a generative adversarial network; through training, the model learns the image features of infrared blind pixels and achieves good compensation on the blind-pixel image data sets used for training. Second, blind-pixel detection is performed on the test images to produce a binary mask matrix, which is combined with the constructed blind-pixel compensation loss function to generate a fake image. Finally, the original blind-pixel image and the generated image are fitted with the Poisson image blending algorithm, the compensation precision is refined iteratively, and the compensation of the infrared blind pixels is completed. Experimental results show that the GAN–PIB algorithm adapts well to both isolated and clustered blind pixels and that, compared with traditional algorithms, the compensated images offer higher intelligibility and richer texture details.
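For intuition, the pipeline described in the abstract can be sketched roughly as follows: a binary mask marks the blind pixels, a latent code is optimised against a masked context loss plus an adversarial prior to produce the fake image, and the generated image is fused back into the original frame with Poisson blending. The sketch below illustrates this flow under assumed interfaces; the generator G, discriminator D, latent code z, and all helper names are hypothetical, and this is not the authors' implementation.

```python
# A minimal, illustrative sketch of a GAN-plus-Poisson-blending pipeline.
# All names (detect_blind_pixels, compensation_loss, poisson_blend, G, D, z)
# are assumptions for illustration, not taken from the paper.
import numpy as np
import cv2
import torch

def detect_blind_pixels(img, k=3.0):
    """Flag pixels that deviate strongly from their local neighbourhood
    (a simple stand-in for the paper's blind-pixel detection step)."""
    local_mean = cv2.blur(img.astype(np.float32), (5, 5))
    residual = np.abs(img.astype(np.float32) - local_mean)
    return (residual > k * residual.std()).astype(np.uint8)  # 1 = blind pixel

def compensation_loss(G, D, z, img, mask, lam=0.1):
    """Masked context loss plus an adversarial prior: the latent code z is
    optimised so that G(z) matches the valid (non-blind) pixels while the
    discriminator D judges the generated image as realistic."""
    fake = G(z)                                              # generated image
    valid = torch.from_numpy(1 - mask).float()               # 1 where pixel is good
    target = torch.from_numpy(img.astype(np.float32))
    context = torch.mean(valid * torch.abs(fake - target))   # ignore blind pixels
    prior = -torch.mean(D(fake))                             # realism term
    return context + lam * prior

def poisson_blend(original, generated, mask):
    """Fuse the generated image into the original frame with Poisson image
    blending; OpenCV's seamlessClone is used as a readily available solver."""
    src = cv2.cvtColor(generated, cv2.COLOR_GRAY2BGR)
    dst = cv2.cvtColor(original, cv2.COLOR_GRAY2BGR)
    centre = (original.shape[1] // 2, original.shape[0] // 2)
    out = cv2.seamlessClone(src, dst, mask * 255, centre, cv2.NORMAL_CLONE)
    return cv2.cvtColor(out, cv2.COLOR_BGR2GRAY)
```

In this reading, iterating the latent-code optimisation and the blending step corresponds to the iterative refinement of compensation precision mentioned in the abstract.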

Keywords

Infrared image · Nonuniformity · Blind-pixel compensation · Generative adversarial networks · Poisson image blending

Notes

Acknowledgements

This work was supported by the National Natural Science Foundation of China (61705019) and the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (12KJA510001).

Author contributions

SC curated the data; MJ proposed the methodology; SC, CZ and YZ supervised the study; MJ wrote the original draft; SC and MJ reviewed and edited the manuscript.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflicts of interest.


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Jiangsu Key Laboratory of Meteorological Observation and Information Processing, Nanjing University of Information Science and Technology, Nanjing, China
  2. Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing, China
