CNNs hard voting for multi-focus image fusion

  • Mostafa Amin-Naji
  • Ali Aghagolzadeh
  • Mehdi Ezoji
Original Research

Abstract

The main idea of image fusion is to gather the necessary features and information from several images into a single image. Multi-focus image fusion gathers this information from the focused areas of multiple images, so that the ideal fused image contains all the focused parts of the input images. Multi-focus image fusion has been studied extensively in both the spatial and transform domains. Recently, methods based on deep learning have emerged and have greatly improved the decision map. Nevertheless, constructing an ideal initial decision map remains difficult, so previous methods depend heavily on extensive post-processing algorithms. This paper proposes a new convolutional neural network (CNN) based on ensemble learning for multi-focus image fusion. The network applies hard voting to three CNN branches, each trained on a different dataset. Using several models and datasets instead of a single one is more reliable and helps the network improve its classification accuracy. In addition, this paper introduces a new, simple arrangement of the patches in the multi-focus datasets that is very useful for obtaining better classification accuracy. With this arrangement, three types of multi-focus datasets are created with the help of gradients in the vertical and horizontal directions. This paper shows that the initial segmented decision map of the proposed method is much cleaner than those of other methods, and even cleaner than their final decision maps after refinement with extensive post-processing. The experimental results and analysis validate that the proposed network produces the cleanest initial decision map and the highest-quality fused image compared with other state-of-the-art methods. These comparisons are performed with various qualitative and quantitative assessments, evaluated with several fusion metrics, to demonstrate the superiority of the proposed network.
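The ensemble decision described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows the hard-voting step, assuming each of the three CNN branches has already produced a binary decision map (1 = pixel judged focused in source image A, 0 = focused in source image B). The function name `hard_vote` and the toy 2×2 maps are hypothetical.

```python
import numpy as np

def hard_vote(branch_maps):
    """Majority vote over per-branch binary decision maps.

    branch_maps: list of 2-D uint arrays of 0/1 labels, one per CNN branch.
    Returns the fused binary decision map (1 where most branches voted 1).
    """
    stacked = np.stack(branch_maps)               # shape: (n_branches, H, W)
    votes = stacked.sum(axis=0)                   # per-pixel count of 1-votes
    return (votes > len(branch_maps) / 2).astype(np.uint8)

# Three hypothetical branch outputs on a 2x2 patch grid
a = np.array([[1, 0], [1, 1]])
b = np.array([[1, 0], [0, 1]])
c = np.array([[0, 1], [1, 1]])
print(hard_vote([a, b, c]))  # each pixel takes the majority of the three votes
```

With an odd number of branches, as here, the strict majority threshold can never tie, which is one practical reason ensembles of this kind use three voters.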

Keywords

Multi-focus · Image fusion · Hard voting · Ensemble learning · Deep learning · Convolutional neural networks


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
