Fusion of multi-exposure images using recursive and Gaussian filter

  • Vishal Chaudhary
  • Vinay Kumar


This paper proposes a novel technique for creating a high dynamic range image by combining a bracketed exposure sequence without a priori knowledge of the source images. Each source image is split into three feature images: constant, high-varying, and low-varying. For the high- and low-varying features, the pixels carrying the most information are selected and combined to construct collective high- and low-varying feature images. The collective constant feature image is constructed as a weighted average of the constant feature images, where the weights are computed from the information present in the original source images. These high-varying, low-varying, and constant feature images are then combined to produce the final fused image. Objective quality-evaluation parameters show a significant improvement of the proposed method over the state of the art.
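The decomposition-and-fusion pipeline described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' exact method: it uses a two-layer split (smooth base vs. detail) instead of the paper's three feature images, a plain Gaussian filter only (the paper additionally uses a recursive filter), and local variance as a stand-in for the "information" measure; all function names (`gaussian_blur`, `fuse_exposures`) are hypothetical.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0, radius=4):
    """Separable Gaussian smoothing (stand-in for the paper's Gaussian filter)."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    def smooth(line):
        # edge-pad so the filtered line keeps its original length
        return np.convolve(np.pad(line, radius, mode="edge"), k, mode="valid")
    img = np.apply_along_axis(smooth, 0, np.asarray(img, dtype=float))  # columns
    img = np.apply_along_axis(smooth, 1, img)                           # rows
    return img

def fuse_exposures(stack, sigma=2.0):
    """Toy exposure fusion over grayscale images in [0, 1].

    Each exposure is split into a smooth 'constant' base layer and a
    'varying' detail layer.  Details are fused by per-pixel selection of
    the strongest response; bases are fused by a weighted average, with
    weights taken from smoothed local variance (an 'information' proxy).
    """
    stack = [np.asarray(im, dtype=float) for im in stack]
    bases = [gaussian_blur(im, sigma) for im in stack]
    details = np.stack([im - b for im, b in zip(stack, bases)])

    # Varying features: keep, per pixel, the detail with the largest magnitude.
    idx = np.abs(details).argmax(axis=0)
    fused_detail = np.take_along_axis(details, idx[None], axis=0)[0]

    # Constant features: weight each base by smoothed local variance of its source.
    w = np.stack([gaussian_blur((im - b) ** 2, sigma)
                  for im, b in zip(stack, bases)]) + 1e-8
    w /= w.sum(axis=0)
    fused_base = (np.stack(bases) * w).sum(axis=0)

    return fused_base + fused_detail

# Usage on a synthetic under-/over-exposed pair
rng = np.random.default_rng(0)
scene = rng.random((16, 16))
under = np.clip(scene * 0.4, 0, 1)          # dark exposure
over = np.clip(scene * 0.4 + 0.5, 0, 1)     # bright exposure
fused = fuse_exposures([under, over])
```

The max-selection on the detail layers mirrors the paper's idea of keeping the most informative pixels for the varying features, while the variance-based weights play the role of the information measure used to average the constant feature images.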


Keywords: Dynamic range · Exposure time · Recursive filter · Image fusion




Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Electronics and Communication Engineering, Thapar Institute of Engineering and Technology, Patiala, India
