Image Fusion Based on Convolutional Sparse Representation with Mask Decoupling

  • Chengfang Zhang
  • Yuling Chen
  • Liangzhong Yi
  • Xingchun Yang
  • Xin Jin
  • Dan Yan
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1060)


Convolutional sparse representation (CSR) has been successfully applied to image fusion to overcome the limited detail preservation and high sensitivity to misregistration of SR-based fusion. CSR-based fusion is also more time-efficient than SR-based fusion, but it can introduce image boundary artifacts because of the periodic boundary conditions assumed in convolutional sparse coding. To avoid these artifacts, this paper introduces the mask decoupling (MD) technique into convolutional sparse coding and dictionary learning and applies it to image fusion. First, dictionary filters are learned from the USC-SIPI image database using convolutional BPDN dictionary learning with mask decoupling. Second, we propose a CSR-MD-based fusion method for multi-focus and multi-modal images. Experimental results demonstrate that our algorithm avoids boundary artifacts and outperforms existing methods in terms of both subjective visual quality and objective evaluation criteria. Compared with the CSR-based method, \(Q^{\text{ABF}}\), \(Q^{e}\), \(Q^{p}\) and \(Q^{\text{CB}}\) increase by 3.31%, 3.76%, 3.05% and 3.15% on average.
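In CSR-based fusion, each source image is decomposed into a set of coefficient maps (one per dictionary filter), and the fused coefficients are typically selected per pixel by comparing an activity level such as the l1 norm of the coefficient vector across filters. The following is a minimal NumPy sketch of that choose-max fusion step only, not the authors' full pipeline; the function name, the (K, H, W) map layout, and the use of the l1 norm as the activity measure are illustrative assumptions. Learning the dictionary and solving the masked convolutional BPDN problem itself would require a dedicated solver (e.g. the `ConvBPDNMaskDcpl` class in the SPORCO library).

```python
import numpy as np

def fuse_coeff_maps(c_a, c_b):
    """Fuse two sets of CSR coefficient maps with a choose-max rule.

    c_a, c_b: arrays of shape (K, H, W), i.e. K coefficient maps per
    source image. At each pixel, the coefficient vector (taken across
    the K maps) with the larger l1 norm is kept as the fused vector.
    (Illustrative sketch; the paper's exact activity measure may differ.)
    """
    act_a = np.sum(np.abs(c_a), axis=0)  # activity level map of A, shape (H, W)
    act_b = np.sum(np.abs(c_b), axis=0)  # activity level map of B, shape (H, W)
    mask = act_a >= act_b                # True where image A is more active
    # Broadcast the (H, W) mask over the K filter axis and select per pixel.
    return np.where(mask[None, :, :], c_a, c_b)
```

The fused image is then reconstructed by summing the convolutions of the fused coefficient maps with the learned dictionary filters; with mask decoupling, a spatial mask in the data-fidelity term prevents the periodic boundary wrap-around from contaminating the coefficients near the image edges.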


Image fusion · Convolutional sparse coding · Convolutional BPDN dictionary learning · Mask decoupling · Boundary artifacts



The authors would like to thank the editors and anonymous reviewers for their detailed review, valuable comments and constructive suggestions. This work is supported by the National Natural Science Foundation of China (Grant 61372187), Research and Practice on Innovation of Police Station Work Assessment System (Grant 18RKX1034) and the Sichuan Science and Technology Program (2019YFS0068 and 2019YFS0069).



Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Chengfang Zhang¹
  • Yuling Chen¹
  • Liangzhong Yi¹
  • Xingchun Yang¹
  • Xin Jin¹
  • Dan Yan¹
  1. Sichuan Police College, Luzhou, China
