
A lightweight scheme for multi-focus image fusion

  • Xin Jin
  • Jingyu Hou
  • Rencan Nie
  • Shaowen Yao
  • Dongming Zhou
  • Qian Jiang
  • Kangjian He

Abstract

Multi-focus image fusion aims to fuse images taken of the same scene with different focus settings, yielding a single resultant image in which all objects are in focus. However, most existing techniques cannot achieve good fusion performance and acceptable complexity simultaneously. To improve fusion efficiency and performance, we propose a lightweight multi-focus image fusion scheme based on the Laplacian pyramid transform (LPT) and adaptive pulse coupled neural networks with local spatial frequency (PCNN-LSF), which needs to process fewer sub-images than common methods. The proposed scheme employs LPT to decompose a source image into its constituent sub-images. The spatial frequency (SF) is calculated to adjust the linking strength β of the PCNN according to the gradient features of the sub-images. An oscillation frequency graph (OFG) of the sub-images is then generated by the PCNN model, and the local spatial frequency (LSF) of the OFG is calculated as the key step in fusing the sub-images. Because the LSF of the OFG represents its regional features, incorporating it into the fusion scheme effectively describes the detailed information of the sub-images: the LSF enhances the features of the OFG and makes it easy to extract high-quality coefficients from the sub-images. Experiments indicate that the proposed scheme achieves good fusion results and is more efficient than other commonly used image fusion algorithms.
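As a concrete reference point for the SF measure mentioned above, the sketch below computes the standard spatial frequency of a grayscale image from the root-mean-square of its row and column first differences (the common definition due to Eskicioglu and Fisher). The function name and NumPy implementation are illustrative only, not the authors' code, and the local variant used in the paper would apply the same computation over sliding windows of the OFG.

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency SF = sqrt(RF^2 + CF^2) of a grayscale image,
    where RF and CF are the RMS horizontal and vertical first differences,
    both normalized by the total number of pixels M*N."""
    img = np.asarray(img, dtype=np.float64)
    n = img.size
    rf_sq = np.sum(np.diff(img, axis=1) ** 2) / n  # row (horizontal) frequency
    cf_sq = np.sum(np.diff(img, axis=0) ** 2) / n  # column (vertical) frequency
    return np.sqrt(rf_sq + cf_sq)
```

A larger SF indicates stronger gradients, i.e. a better-focused region; in the scheme above this value steers the PCNN linking strength β.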

Keywords

Image processing · Image fusion · Pulse coupled neural networks · Laplacian pyramid transform · Spatial frequency

Notes

Acknowledgements

The authors thank the editors and the anonymous reviewers for their careful work and valuable suggestions on this study. The authors thank O. Rockinger, G. Easley et al., and A. L. da Cunha et al. for kindly sharing their programs. This study is supported by the National Natural Science Foundation of China (No. 61365001, No. 61463052 and No. 61640306). We also thank the Scientific Research Fund of the Education Department of Yunnan Province (No. 2017YJS108) and the Doctoral Candidate Academic Award of Yunnan Province for their support, and Dr. Shin-Jye Lee for his valuable advice.

Compliance with ethical standards

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this manuscript. This article does not contain any studies with human participants performed by any of the authors. Informed consent was obtained from all individual participants included in the study.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. School of Information, Yunnan University, Kunming, China
  2. School of Information Technology, Deakin University, Melbourne, Australia
  3. School of Software, Yunnan University, Kunming, China
