A New Image Fusion Method for Estimating 3D Surface Depth

  • Conference paper
Computer Vision and Graphics (ICCVG 2008)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5337)

Abstract

Creating virtual reality models from photographs is a complex and time-consuming process that requires special equipment such as laser scanners, a large number of photographs, and manual interaction. In this work we present a method for generating the surface geometry of a photographed scene. Our approach is based on the phenomenon of shallow depth-of-field in close-up photography. Representing such surface details increases visual realism in a range of application areas, especially for biological structures and microorganisms.

For testing purposes, a set of images of the same scene is taken with a typical digital camera fitted with a macro lens, each image with a different depth-of-field. Our new image fusion method employs the discrete Fourier transform to identify sharp regions in this set of images, combines them into a fully focused image, and finally produces a height field map. Further image processing algorithms approximate a three-dimensional surface using this height field map and the fused image. Experimental results show that our method works for a wide range of cases and provides a good tool for acquiring surfaces from a few photographs.




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Denkowski, M., Chlebiej, M., Mikołajczak, P. (2009). A New Image Fusion Method for Estimating 3D Surface Depth. In: Bolc, L., Kulikowski, J.L., Wojciechowski, K. (eds) Computer Vision and Graphics. ICCVG 2008. Lecture Notes in Computer Science, vol 5337. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02345-3_10

  • DOI: https://doi.org/10.1007/978-3-642-02345-3_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-02344-6

  • Online ISBN: 978-3-642-02345-3

  • eBook Packages: Computer Science, Computer Science (R0)
