Incomplete 3D for multiview representation and synthesis of video objects

  • Jens-Rainer Ohm
  • Karsten Müller
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1425)

Abstract

This paper introduces a new form of representation for three-dimensional video objects. We have developed a technique to extract disparity and texture data from video objects that are captured simultaneously with multiple-camera configurations. As a result, we obtain the video object plane as an unwrapped surface of a 3D object, containing all texture data visible from any of the cameras. This texture surface can be encoded like any 2D video object plane, while the 3D information is contained in the associated disparity map. Different viewpoints can then be reconstructed from the texture surface by simple disparity-based projection. The merits of the technique are efficient multiview encoding of single video objects and support for viewpoint adaptation functionality, which is desirable when mixing natural and synthetic images. We have performed experiments with the MPEG-4 video verification model, where the disparity map is encoded using the tools provided for grayscale alpha data encoding. Due to its simplicity, the technique is suitable for applications that require real-time viewpoint adaptation towards video objects.
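To make the disparity-based projection step concrete, the sketch below forward-warps samples of an unwrapped texture surface into a virtual camera view according to the associated disparity map. This is a minimal illustration under stated assumptions, not the authors' implementation: the NumPy array layout, the purely horizontal shift (rectified epipolar geometry), the viewpoint parameter alpha, and the omission of the object's shape mask and of hole filling are all simplifications introduced here for clarity.

```python
import numpy as np

def project_view(texture, disparity, alpha):
    """Forward-warp an unwrapped texture surface to a virtual viewpoint.

    texture   : (H, W, 3) uint8 array, texture surface of the video object
    disparity : (H, W) float array, per-pixel disparity in pixels
    alpha     : float in [0, 1], relative position of the virtual camera
                between two reference views (illustrative assumption)
    """
    h, w = disparity.shape
    out = np.zeros_like(texture)
    ys, xs = np.mgrid[0:h, 0:w]
    # Shift each sample horizontally by a fraction of its disparity;
    # samples falling outside the image are clipped to the border.
    xt = np.clip(np.round(xs + alpha * disparity).astype(int), 0, w - 1)
    out[ys, xt] = texture[ys, xs]
    # Note: forward warping leaves holes at disoccluded positions; a complete
    # system would fill them and restrict warping to the object's shape mask.
    return out
```

The point of the sketch is only that viewpoint reconstruction reduces to a per-pixel shift controlled by the disparity map, which is why the approach lends itself to real-time viewpoint adaptation.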

Keywords

Texture Surface, Camera View, Separation Line, Video Object, Epipolar Line

References

  1. J.-R. Ohm and E. Izquierdo: “An object-based system for stereoscopic viewpoint synthesis,” IEEE Trans. Circ. Syst. Video Tech., vol. CSVT-7, no. 5, pp. 801–811, Oct. 1997
  2. E. Izquierdo and M. Ernst: “Motion/disparity analysis for 3D-video-conference applications,” Proc. Intern. Workshop on Stereoscopy and 3-Dimensional Imaging (IWS3DI'95), Santorini, Greece, Sept. 1995
  3. O. Faugeras: “Three-Dimensional Computer Vision,” MIT Press, Cambridge, Mass., 1993
  4. J.-R. Ohm et al.: “A realtime hardware system for stereoscopic video-conferencing with viewpoint adaptation,” to appear in Image Communication, special issue on 3D TV, January 1998
  5. ISO/IEC JTC1/SC29/WG11: “MPEG-4 video verification model version 8.0,” Document no. N1796, July 1997
  6. ISO/IEC JTC1/SC29/WG11: “SNHC Verification Model 3.0,” Document no. N1545, Feb. 1997
  7. “The Moving Worlds proposal for VRML 2.0,” submitted by Silicon Graphics in collaboration with Sony and WorldMaker, May 1996
  8. J.-R. Ohm, K. Müller, C. Stoffers and S. Kruse: “Incomplete 3D representation of video objects,” ISO/IEC JTC1/SC29/WG11, document no. M2639, Oct. 1997

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Jens-Rainer Ohm (1)
  • Karsten Müller (1)
  1. Heinrich-Hertz-Institut für Nachrichtentechnik Berlin GmbH, Berlin, Germany