Hole Filling for View Synthesis

Chapter in: 3D-TV System with Depth-Image-Based Rendering

Abstract

The depth-image-based rendering (DIBR) technique is recognized as a promising tool for supporting the advanced 3D video services required in multi-view video (MVV) systems. An inherent problem with DIBR, however, is dealing with the newly exposed areas that appear in synthesized views. These occur when parts of the scene are not visible from every viewpoint, leaving blank spots called disocclusions, which grow larger as the distance between cameras increases. This chapter addresses the disocclusion problem in two ways: (1) preprocessing of the depth data, and (2) image inpainting of the synthesized view. To deal with small disocclusions, a hole filling strategy is designed by preprocessing the depth video before DIBR, while for larger disocclusions an inpainting approach is proposed that retrieves the missing pixels by leveraging the available depth information.
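The depth-preprocessing idea can be illustrated with a minimal sketch: the width of a disocclusion after warping grows with the depth jump between neighboring pixels, so low-pass filtering the depth map before DIBR shrinks the holes. The box filter and 1D depth profile below are hypothetical illustrations, not the chapter's actual filter.

```python
# Sketch: smoothing a depth discontinuity before DIBR warping.
# The hole width after 3D warping is roughly proportional to the
# depth jump between neighbouring pixels, so smoothing reduces it.

def box_smooth(depth, radius):
    """Simple box filter over a 1D depth profile (illustrative)."""
    n = len(depth)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = depth[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A step edge in depth: foreground at 200, background at 50.
profile = [200] * 5 + [50] * 5
smoothed = box_smooth(profile, 2)

# Largest depth jump before and after smoothing.
max_jump = max(abs(a - b) for a, b in zip(profile, profile[1:]))
max_jump_smoothed = max(abs(a - b) for a, b in zip(smoothed, smoothed[1:]))
print(max_jump, max_jump_smoothed)  # the jump shrinks from 150 to 30
```

Smoothing trades geometric accuracy for smaller holes, which is why the chapter reserves it for small disocclusions and uses inpainting for larger ones.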


Notes

  1. Hysteresis is used to track the most relevant pixels along the contours. It uses two thresholds: if the gradient magnitude is below the low threshold, the pixel is set to zero (marked as a non-edge); if it is above the high threshold, the pixel is marked as an edge; if it falls between the two thresholds, it is set to zero unless it is connected to a pixel already marked as an edge by the high threshold.

  2. For two 2D points \( \left( {u_{1} ,v_{1} } \right) \) and \( \left( {u_{2} ,v_{2} } \right) \), the Minkowski distance of order \( k \) is defined as \( \sqrt[k]{{\left| {u_{1} - u_{2} } \right|^{k} + \left| {v_{1} - v_{2} } \right|^{k} }} \).

  3. Isophotes are level lines of constant gray level. Mathematically, the isophote direction can be written as \( \nabla^{ \bot } I \), where \( \nabla^{ \bot } = \left( { - \partial_{y} ,\partial_{x} } \right) \) is the direction of smallest change, i.e., perpendicular to the gradient.
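The hysteresis rule in note 1 can be sketched as follows, here in a 1D form for brevity (the chapter applies it to 2D edge maps; the magnitudes and thresholds below are made-up illustration values).

```python
# Sketch of hysteresis thresholding: keep pixels above the high
# threshold, drop pixels below the low threshold, and keep in-between
# ("weak") pixels only if they connect to an already-kept pixel.

def hysteresis(magnitudes, low, high):
    n = len(magnitudes)
    keep = [m >= high for m in magnitudes]  # strong pixels
    changed = True
    while changed:  # grow edges outward through weak neighbours
        changed = False
        for i in range(n):
            if keep[i] or magnitudes[i] < low:
                continue
            if (i > 0 and keep[i - 1]) or (i + 1 < n and keep[i + 1]):
                keep[i] = True
                changed = True
    return [1 if k else 0 for k in keep]

mags = [0.1, 0.4, 0.9, 0.5, 0.2, 0.6, 0.1]
print(hysteresis(mags, low=0.3, high=0.8))  # [0, 1, 1, 1, 0, 0, 0]
```

Note how the isolated weak value 0.6 is rejected: it lies between the thresholds but has no kept neighbour, which is exactly what suppresses spurious responses away from real contours.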
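The Minkowski distance of note 2 is straightforward to compute; the sketch below also shows the two familiar special cases.

```python
# Minkowski distance of order k between two 2D points, with the two
# absolute differences summed under the k-th root.

def minkowski(p, q, k):
    return (abs(p[0] - q[0]) ** k + abs(p[1] - q[1]) ** k) ** (1.0 / k)

# k = 1 gives the Manhattan distance, k = 2 the Euclidean distance.
print(minkowski((0, 0), (3, 4), 1))  # 7.0
print(minkowski((0, 0), (3, 4), 2))  # 5.0
```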
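The isophote direction of note 3 can be estimated at a pixel by rotating the image gradient by 90°. This is an illustrative sketch using central finite differences on a nested-list grayscale image; it assumes interior pixels only.

```python
# Isophote direction at pixel (x, y): the rotated gradient
# (-dI/dy, dI/dx), estimated with central finite differences.

def isophote_direction(img, x, y):
    dx = (img[y][x + 1] - img[y][x - 1]) / 2.0  # ∂I/∂x
    dy = (img[y + 1][x] - img[y - 1][x]) / 2.0  # ∂I/∂y
    return (-dy, dx)  # perpendicular to the gradient

# A horizontal intensity ramp: gray level increases left to right,
# so the gradient points along +x and the isophote runs along y.
img = [[0, 10, 20, 30] for _ in range(4)]
print(isophote_direction(img, 1, 1))  # gradient (10, 0) -> isophote along y
```

Inpainting methods such as those cited in this chapter propagate information along this direction so that filled-in regions continue the surrounding level lines rather than cutting across them.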


Acknowledgments

This work is partially supported by the National Institute of Information and Communications Technology (NICT), the Strategic Information and Communications R&D Promotion Programme (SCOPE) No. 101710002, Grant-in-Aid for Scientific Research No. 21200002 in Japan, the Funding Program for Next Generation World-Leading Researchers No. LR030 (Cabinet Office, Government of Japan), and the Japan Society for the Promotion of Science (JSPS) Program for Foreign Researchers.

Author information

Corresponding author

Correspondence to Ismael Daribo.


Copyright information

© 2013 Springer Science+Business Media New York

About this chapter

Cite this chapter

Daribo, I., Saito, H., Furukawa, R., Hiura, S., Asada, N. (2013). Hole Filling for View Synthesis. In: Zhu, C., Zhao, Y., Yu, L., Tanimoto, M. (eds) 3D-TV System with Depth-Image-Based Rendering. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-9964-1_6


  • DOI: https://doi.org/10.1007/978-1-4419-9964-1_6

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4419-9963-4

  • Online ISBN: 978-1-4419-9964-1

  • eBook Packages: Engineering (R0)
