
MSDNet for Medical Image Fusion

  • Xu Song
  • Xiao-Jun Wu
  • Hui Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11902)

Abstract

Since DenseFuse operates at only a single scale, we propose a multi-scale DenseNet (MSDNet) for medical image fusion. The network consists of an encoding network, a fusion layer and a decoding network. To exploit features at different scales, the encoding network incorporates a multi-scale mechanism that extracts features with three filters of different sizes; widening the encoding network in this way preserves more image detail. A fusion strategy is then applied to the features at each scale separately, and the decoding network reconstructs the fused image from the fused features. Compared with existing methods, the proposed method achieves state-of-the-art fusion performance in both objective and subjective assessment.
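The encode-fuse-decode structure described above is compact enough to sketch in code. The following PyTorch snippet is a minimal, illustrative reading of that pipeline only: the filter sizes (3, 5, 7), the channel widths, the depth of the dense blocks and the additive fusion rule are assumptions made for demonstration, not the configuration reported in the paper.

```python
# Minimal sketch of a multi-scale encode-fuse-decode network in the spirit of
# the abstract. All hyperparameters here (kernel sizes 3/5/7, growth rate,
# block depth, additive fusion) are illustrative assumptions.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Small dense block: each conv layer sees all previous feature maps."""

    def __init__(self, in_ch, growth, n_layers, k):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, k, padding=k // 2),
                nn.ReLU(inplace=True)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # dense connectivity
        return x


class MSDNetSketch(nn.Module):
    def __init__(self, kernel_sizes=(3, 5, 7), growth=16, n_layers=3):
        super().__init__()
        # One dense-block branch per filter size -> multi-scale encoder.
        self.branches = nn.ModuleList(
            [DenseBlock(1, growth, n_layers, k) for k in kernel_sizes])
        enc_ch = self.branches[0].out_channels * len(kernel_sizes)
        # Plain convolutional decoder reconstructs the fused image.
        self.decoder = nn.Sequential(
            nn.Conv2d(enc_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1))

    def encode(self, x):
        return [branch(x) for branch in self.branches]

    def forward(self, a, b):
        # Fuse the two source images scale by scale (simple addition here),
        # then decode the concatenated multi-scale fused features.
        fused = [fa + fb for fa, fb in zip(self.encode(a), self.encode(b))]
        return self.decoder(torch.cat(fused, dim=1))


if __name__ == "__main__":
    net = MSDNetSketch()
    ct, mri = torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256)
    print(net(ct, mri).shape)  # torch.Size([1, 1, 256, 256])
```

Element-wise addition is used as the per-scale fusion rule purely for simplicity; DenseFuse-style approaches also consider alternatives such as l1-norm-weighted strategies at the fusion layer.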

Keywords

Medical image fusion · Multi-scale · Dense block · Deep learning

References

  1. Du, J., Li, W., Lu, K., et al.: An overview of multi-modal medical image fusion. Neurocomputing 215, 3–20 (2016)
  2. Liu, Y., Chen, X., Cheng, J., et al.: A medical image fusion method based on convolutional neural networks. In: 2017 20th International Conference on Information Fusion (Fusion), pp. 1–7. IEEE (2017)
  3. Yang, S., Wang, M., Jiao, L., et al.: Image fusion based on a new contourlet packet. Inf. Fusion 11(2), 78–84 (2010)
  4. Li, H., Manjunath, B.S., Mitra, S.K.: Multisensor image fusion using the wavelet transform. Graph. Models Image Process. 57(3), 235–245 (1995)
  5. Guo, L., Dai, M., Zhu, M.: Multifocus color image fusion based on quaternion curvelet transform. Opt. Express 20(17), 18846–18860 (2012)
  6. Liu, Y., Wang, Z.: Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Proc. 9(5), 347–357 (2014)
  7. Yin, H., Li, Y., Chai, Y., et al.: A novel sparse-representation-based multi-focus image fusion approach. Neurocomputing 216, 216–229 (2016)
  8. Li, H., Wu, X.-J.: Multi-focus image fusion using dictionary learning and low-rank representation. In: Zhao, Y., Kong, X., Taubman, D. (eds.) ICIG 2017. LNCS, vol. 10666, pp. 675–686. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-71607-7_59
  9. Liu, Y., Chen, X., Peng, H., et al.: Multi-focus image fusion with a deep convolutional neural network. Inf. Fusion 36, 191–207 (2017)
  10. Li, H., Wu, X.J., Kittler, J.: Infrared and visible image fusion using a deep learning framework. In: 2018 24th International Conference on Pattern Recognition (ICPR), pp. 2705–2710. IEEE (2018)
  11. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  12. Li, H., Wu, X.J.: DenseFuse: a fusion approach to infrared and visible images. IEEE Trans. Image Process. 28(5), 2614–2623 (2019)
  13. Huang, G., Liu, Z., Van Der Maaten, L., et al.: Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708 (2017)
  14. Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48
  15. He, C., Liu, Q., Li, H., et al.: Multimodal medical image fusion based on IHS and PCA. Procedia Eng. 7, 280–285 (2010)
  16. Xu, Z.: Medical image fusion using multi-level local extrema. Inf. Fusion 19, 38–48 (2014)
  17. Du, J., Li, W., Xiao, B.: Anatomical-functional image fusion by information of interest in local Laplacian filtering domain. IEEE Trans. Image Process. 26(12), 5855–5866 (2017)
  18. Yin, M., Liu, X., Liu, Y., et al.: Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain. IEEE Trans. Instrum. Meas. 99, 1–16 (2018)
  19. Haghighat, M., Razian, M.A.: Fast-FMI: non-reference image fusion metric. In: 2014 IEEE 8th International Conference on Application of Information and Communication Technologies (AICT), pp. 1–3. IEEE (2014)
  20.
  21. Prabhakar, K.R., Srikar, V.S., Babu, R.V.: DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs. In: ICCV, pp. 4724–4732 (2017)
  22. Wang, Z., Bovik, A.C., Sheikh, H.R., et al.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)
  23. Chen, Z., Wu, X.J., Kittler, J.: A sparse regularized nuclear norm based matrix regression for face recognition with contiguous occlusion. Pattern Recogn. Lett. (2019)
  24. Chen, Z., Wu, X.-J., Yin, H.-F., Kittler, J.: Robust low-rank recovery with a distance-measure structure for face recognition. In: Geng, X., Kang, B.-H. (eds.) PRICAI 2018. LNCS (LNAI), vol. 11013, pp. 464–472. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-97310-4_53

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, School of IoT Engineering, Jiangnan University, Wuxi, China
