
Dynamic View Interpolation Without Affine Reconstruction

Chapter in: Confluence of Computer Vision and Computer Graphics

Part of the book series: NATO Science Series (ASHT, volume 84)


Abstract

This chapter presents techniques for view interpolation between two reference views of a dynamic scene captured at different times. The interpolations produced portray one possible physically valid version of what transpired in the scene between the times at which the two reference views were captured. We show how straight-line object motion, relative to a camera-centered coordinate system, can be achieved, and how the appearance of straight-line object motion relative to the background can be created. The special case of affine cameras is also discussed. The methods presented work with widely separated, uncalibrated cameras and sparse point correspondences. The approach does not involve finding the camera-to-camera transformation and thus does not implicitly perform affine reconstruction of the scene. For circumstances in which the camera-to-camera transformation can be found, we introduce a vector space of possible synthetic views that follows naturally from the given reference views. It is assumed that the motion of each object in the original scene consists of a series of rigid translations.
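The full chapter is not included in this preview, but the interpolation premise summarized in the abstract rests on a simple core operation: given sparse point correspondences between the two reference views, an in-between view places each matched point at a convex combination of its two observed positions, with independently moving objects handled separately. The sketch below illustrates only that linear-interpolation step under the assumption of pre-aligned (rectified) views; the function name and point data are hypothetical and are not taken from the chapter.

```python
import numpy as np

def interpolate_correspondences(pts_a, pts_b, s):
    """Linearly interpolate matched image points between two reference views.

    pts_a, pts_b : (N, 2) arrays of corresponding pixel coordinates in
                   view A (time t0) and view B (time t1).
    s            : interpolation parameter in [0, 1]; s = 0 reproduces the
                   point layout of view A, s = 1 that of view B.
    """
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    return (1.0 - s) * pts_a + s * pts_b

# Hypothetical correspondences on one rigidly translating object.
pts_a = [[100.0, 120.0], [140.0, 118.0], [122.0, 160.0]]
pts_b = [[180.0, 125.0], [220.0, 123.0], [202.0, 165.0]]
print(interpolate_correspondences(pts_a, pts_b, 0.5))
```

In a complete pipeline of this kind, the interpolated correspondences would drive a warp of the reference images (and per-object grouping would keep independently translating objects physically consistent); this snippet shows only the point-level interpolation that such a warp is built on.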




Copyright information

© 2000 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Manning, R.A., Dyer, C.R. (2000). Dynamic View Interpolation Without Affine Reconstruction. In: Leonardis, A., Solina, F., Bajcsy, R. (eds) Confluence of Computer Vision and Computer Graphics. NATO Science Series, vol 84. Springer, Dordrecht. https://doi.org/10.1007/978-94-011-4321-9_7


  • DOI: https://doi.org/10.1007/978-94-011-4321-9_7

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-0-7923-6612-6

  • Online ISBN: 978-94-011-4321-9

