Abstract
Creating virtual reality models from photographs is a complex and time-consuming process that typically requires special equipment such as laser scanners, a large number of photographs, and manual interaction. In this work we present a method for generating the surface geometry of a photographed scene. Our approach is based on the phenomenon of shallow depth-of-field in close-up photography. Representing such surface detail is useful for increasing visual realism in a range of application areas, especially for biological structures or microorganisms.
For testing purposes, a set of images of the same scene is taken with a typical digital camera fitted with a macro lens, each image with a different depth-of-field. Our new image fusion method employs the discrete Fourier transform to identify the sharp regions in this image set, combines them into a fully focused image, and finally produces a height-field map. Further image processing algorithms approximate a three-dimensional surface from this height-field map and the fused image. Experimental results show that our method works for a wide range of cases and provides a good tool for acquiring surfaces from a few photographs.
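The core idea of the fusion step described above can be sketched as follows: score each image block by its high-frequency DFT energy (a standard sharpness proxy), then assemble the fused image from the sharpest block per position, with the index of the winning image serving as a per-block depth value. This is a minimal illustration of the general focus-stacking principle, not the paper's exact algorithm; the block size and the plain energy score are assumptions for the sketch.

```python
import numpy as np

def block_sharpness(img, block=8):
    """Score each block by its high-frequency DFT energy.

    Sharp (in-focus) regions carry more spectral energy away from
    the DC component than blurred ones, so the sum of non-DC
    magnitudes serves as a simple sharpness measure.
    """
    h, w = img.shape
    hb, wb = h // block, w // block
    scores = np.zeros((hb, wb))
    for i in range(hb):
        for j in range(wb):
            tile = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            spec = np.abs(np.fft.fft2(tile))
            spec[0, 0] = 0.0  # drop the DC term (mean brightness)
            scores[i, j] = spec.sum()
    return scores

def fuse_stack(images, block=8):
    """Fuse a focus stack: per block, keep the sharpest image.

    Returns the fused (all-in-focus) image and a height-field map
    holding, for each block, the index of the winning image.  Since
    each index corresponds to a focus distance, the map is a coarse
    proxy for scene depth.
    """
    scores = np.stack([block_sharpness(im, block) for im in images])
    height = np.argmax(scores, axis=0)  # per-block depth index
    fused = np.zeros_like(images[0])
    for i in range(height.shape[0]):
        for j in range(height.shape[1]):
            k = height[i, j]
            fused[i * block:(i + 1) * block, j * block:(j + 1) * block] = \
                images[k][i * block:(i + 1) * block, j * block:(j + 1) * block]
    return fused, height
```

On two synthetic 16x16 images where image 0 is "sharp" (textured) only on the left half and image 1 only on the right, the height map selects index 0 for the left block and index 1 for the right, and the fused image combines both sharp halves.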
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Denkowski, M., Chlebiej, M., Mikołajczak, P. (2009). A New Image Fusion Method for Estimating 3D Surface Depth. In: Bolc, L., Kulikowski, J.L., Wojciechowski, K. (eds) Computer Vision and Graphics. ICCVG 2008. Lecture Notes in Computer Science, vol 5337. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02345-3_10
DOI: https://doi.org/10.1007/978-3-642-02345-3_10
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-02344-6
Online ISBN: 978-3-642-02345-3