A lightweight scheme for multi-focus image fusion
Multi-focus image fusion aims to combine images of the same scene, taken with different focus settings, into a single resultant image in which all objects are in focus. However, most existing techniques cannot achieve good fusion performance and acceptable complexity simultaneously. To improve fusion efficiency and performance, we propose a lightweight multi-focus image fusion scheme based on the Laplacian pyramid transform (LPT) and adaptive pulse coupled neural networks with local spatial frequency (PCNN-LSF); it needs to process fewer sub-images than common methods. The proposed scheme employs LPT to decompose each source image into its constituent sub-images. The spatial frequency (SF) is calculated from the gradient features of the sub-images and used to adjust the linking strength β of the PCNN. The PCNN model then generates an oscillation frequency graph (OFG) for each sub-image, and the local spatial frequency (LSF) of the OFG is calculated as the key step in fusing the sub-images. Because the LSF of the OFG captures its regional features, incorporating it into the fusion scheme effectively describes the detailed information of the sub-images: the LSF enhances the features of the OFG and makes it easier to extract high-quality coefficients from the sub-images. Experiments indicate that the proposed scheme achieves good fusion results and is more efficient than other commonly used image fusion algorithms.
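The spatial frequency measure used to adapt the linking strength β follows the standard definition SF = sqrt(RF² + CF²), where RF and CF are the root-mean-square row-wise and column-wise intensity differences. A minimal sketch of this computation (the function name and normalization by the full block size M·N are our own illustrative choices, not code from the paper):

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency of an image block: SF = sqrt(RF^2 + CF^2).

    RF (row frequency) is the RMS of horizontal first differences,
    CF (column frequency) the RMS of vertical first differences,
    each normalized by the total number of pixels M*N.
    """
    block = np.asarray(block, dtype=np.float64)
    m, n = block.shape
    # Horizontal differences I(i, j) - I(i, j-1), shape (M, N-1)
    rf = np.sqrt(np.sum(np.diff(block, axis=1) ** 2) / (m * n))
    # Vertical differences I(i, j) - I(i-1, j), shape (M-1, N)
    cf = np.sqrt(np.sum(np.diff(block, axis=0) ** 2) / (m * n))
    return np.sqrt(rf ** 2 + cf ** 2)
```

A uniform block yields SF = 0 (no gradient activity), while sharply focused, detail-rich regions yield large SF values, which is why SF serves as a focus indicator here.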
Keywords: Image processing · Image fusion · Pulse coupled neural networks · Laplacian pyramid transform · Spatial frequency
The authors thank the editors and the anonymous reviewers for their careful work and valuable suggestions for this study. The authors thank O. Rockinger, G. Easley et al., and A. L. da Cunha et al. for kindly sharing their programs. This study is supported by the National Natural Science Foundation of China (No. 61365001, No. 61463052, and No. 61640306). We also thank the Scientific Research Fund of the Education Department of Yunnan Province (No. 2017YJS108) and the Doctoral Candidate Academic Award of Yunnan Province for their support. We also thank Dr. Shin-Jye Lee for his valuable advice.
Compliance with ethical standards
Conflict of interest
The authors declare that there is no conflict of interest regarding the publication of this manuscript. This article does not contain any studies with human participants performed by any of the authors.