Infrared blind-pixel compensation algorithm based on generative adversarial networks and Poisson image blending
Infrared focal plane arrays suffer from nonuniformity, which produces blind pixels and degrades image quality. This paper proposes an infrared blind-pixel compensation algorithm based on generative adversarial networks and Poisson image blending (GAN–PIB). It predicts the greyscale values of the original image's blind pixels by combining a pre-trained adversarial network with a blind-pixel compensation function to generate a new image, departing from the interpolation- and filtering-based approach of existing compensation algorithms. First, a blind-pixel compensation network is constructed from generative adversarial networks; through training, the model learns the image features of infrared blind pixels and achieves good compensation on the blind-pixel image data sets used for training. Second, blind-pixel detection is performed on the test images to produce a binary mask matrix, which is combined with the constructed blind-pixel compensation loss function to create the generated (fake) image. Finally, the original blind-pixel image and the generated image are fitted with the Poisson image blending algorithm, the compensation precision is refined through iteration, and compensation of the infrared blind pixels is completed. Experimental results show that the GAN–PIB algorithm adapts well to both isolated and clustered blind pixels and that, compared with traditional algorithms, the compensated images have higher intelligibility and richer texture detail.
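The final fitting step described above can be illustrated with a minimal sketch of iterative Poisson image blending: blind pixels (marked by the binary mask) are filled with values whose discrete Laplacian matches that of the GAN-generated image, while unmasked pixels of the original act as boundary conditions. Function and variable names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def poisson_blend(original, generated, mask, iters=200):
    """Fill masked (blind) pixels of `original` so that their discrete
    Laplacian matches that of `generated`, via Jacobi iteration on the
    discrete Poisson equation. `mask` is a boolean array, True at blind
    pixels. Assumes blind pixels do not lie on the image border
    (np.roll wraps around at the edges).
    """
    result = original.astype(np.float64).copy()
    gen = generated.astype(np.float64)
    result[mask] = gen[mask]  # initial guess from the generated image

    # Guidance field: discrete Laplacian of the generated image.
    lap = (np.roll(gen, 1, 0) + np.roll(gen, -1, 0)
           + np.roll(gen, 1, 1) + np.roll(gen, -1, 1) - 4.0 * gen)

    for _ in range(iters):
        neigh = (np.roll(result, 1, 0) + np.roll(result, -1, 0)
                 + np.roll(result, 1, 1) + np.roll(result, -1, 1))
        # Update only the blind pixels; valid pixels stay fixed.
        result[mask] = (neigh[mask] - lap[mask]) / 4.0
    return result
```

For an isolated blind pixel this reduces to the average of its four valid neighbours corrected by the guidance Laplacian; for clustered blind pixels the iteration propagates boundary greyscale values inward, which is consistent with the iterative precision refinement described in the abstract.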
Keywords: Infrared image · Nonuniformity · Blind-pixel compensation · Generative adversarial networks · Poisson image blending
This work was supported by the National Natural Science Foundation of China (61705019) and the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (12KJA510001).
SC curated the data; MJ proposed the methodology; SC, CZ and YZ supervised the study; MJ wrote the original draft; SC and MJ reviewed and edited the manuscript.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflicts of interest.