Projector-Camera Systems in Entertainment and Art

  • Oliver Bimber
  • Xubo Yang

Abstract

Video projectors have evolved tremendously in the last decade. Reduced costs and increased capabilities (e.g., spatial resolution, brightness, dynamic range, throw ratio) have led to widespread applications in entertainment, art, visualization, and other areas. In this chapter we summarize fundamental visualization and interaction techniques for projector-camera systems that are used to display interactive content on everyday surfaces, without the need for optimized canvases. Coded projections and camera feedback allow the projected light on these complex surfaces to be measured and its modulation to be compensated, while also enabling computer-vision-based interaction techniques.
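To make the compensation idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of per-pixel radiometric compensation. It assumes a linear response model C = I * M + A, where I is the projected input, M the per-pixel surface modulation, and A the ambient/black-level term; in a real projector-camera system M and A would be measured by capturing coded full-white and full-black projections with a geometrically registered camera, whereas here they are synthetic placeholders.

```python
import numpy as np

# Per-pixel radiometric-compensation sketch (illustrative, synthetic data).
# Model assumed: camera reading C = I * M + A, with
#   I : projector input in [0, 1]
#   M : per-pixel surface modulation (color/texture attenuation),
#       normally measured by capturing a full-white projection
#   A : ambient/black-level contribution,
#       normally measured by capturing a full-black projection
H, W = 480, 640
rng = np.random.default_rng(0)

M = rng.uniform(0.3, 1.0, size=(H, W, 3))   # stand-in for the white-capture measurement
A = rng.uniform(0.0, 0.1, size=(H, W, 3))   # stand-in for the black-capture measurement

desired = rng.uniform(0.0, 1.0, size=(H, W, 3))  # image we want to appear on the surface

# Invert the model: I = (desired - A) / M, clipped to the projector gamut [0, 1].
# Clipping marks the limits of static compensation; adaptive methods refine this.
compensation = np.clip((desired - A) / np.maximum(M, 1e-6), 0.0, 1.0)

# Predict the surface appearance when the compensation image is projected.
predicted = compensation * M + A
print("mean absolute error:", np.abs(predicted - desired).mean())
```

Where the desired intensity exceeds what the attenuated projector light can deliver, the clipped pixels remain visibly wrong; this is the core limitation that the content-dependent and adaptive compensation techniques discussed in the chapter address.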

Keywords

Point Spread Function, Dead Zone, Interaction Technique, Projection Surface, Display Surface

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. Faculty of Media, Bauhaus-University Weimar, Weimar, Germany
  2. School of Software, Shanghai Jiao Tong University, Shanghai, China