Beyond the Desktop Metaphor: Toward More Effective Display, Interaction, and Telecollaboration in the Office of the Future via a Multitude of Sensors and Displays

  • Henry Fuchs
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1554)


We are engaged in a long-term project to improve personal productivity for computer-related activities and tele-collaboration in an office environment of the future. Personal computer-related activities, we believe, will be enhanced by the capability to project imagery onto any surface in the office; together with precise head and eye tracking, this will enable head-tracked stereo imagery to be added to the user’s views of his/her office environment, creating a 3D immersive generalization of the now-ubiquitous 2D desktop metaphor as the principal human-computer interface. We plan to realize this kind of system by mounting many video projectors and video cameras around the room, especially around the ceiling. The projectors may provide the only source of light in the room and will allow detailed imagery to be projected (almost) everywhere in the office. In order to generate the appropriate imagery, however, a detailed 3D map of the changing office environment needs to be acquired. It will be acquired by measuring, with synchronized cameras and projectors, the precise 3D location(s) of the surface(s) lit up by each pixel of each projector. Local collaboration will be enhanced by tracking each of several individuals in the office and generating (by time-division multiplexing or by other means) a stereo image pair appropriate for each individual. Objects under design may be displayed, for each individual, from his/her own perspective and to his/her own specifications of interest. Tele-collaboration activities, we believe, will be enhanced by providing such an enhanced office environment to each of a small group of distant collaborators, and by displaying for each participant, in addition to the shared objects under discussion, some combination of the remote scenes that includes the changing 3D images of each of the participants and 3D images of physical objects of joint interest.
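The acquisition step described above — measuring the 3D location of the surface lit by each projector pixel, using synchronized, calibrated cameras and projectors — reduces to ray triangulation once both devices are modeled as calibrated pinholes. The sketch below is illustrative only, not the paper's implementation; the function names and the midpoint triangulation method are my assumptions.

```python
import numpy as np

def pixel_ray(K, R, t, u, v):
    """Back-project pixel (u, v) of a calibrated pinhole device into a
    world-space ray. Convention (assumed): a world point X images at
    K @ (R @ X + t), so the device center is -R.T @ t."""
    origin = -R.T @ t
    d = R.T @ np.linalg.solve(K, np.array([u, v, 1.0]))
    return origin, d / np.linalg.norm(d)

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays: find scalars
    s, t minimizing |(o1 + s*d1) - (o2 + t*d2)| by linear least squares,
    then average the two closest points."""
    A = np.column_stack([d1, -d2])
    s, t = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
```

Sweeping this over every projector pixel whose lit spot a camera can identify (e.g. via structured-light codes) yields the dense, per-pixel 3D map of the room that the display side needs.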
To realize many of these capabilities, each user may need to wear polarized eyeglasses to perceive proper stereo imagery. Although initial results are encouraging, numerous difficult problems remain: how, for example, can imagery be projected onto a dark-colored surface in the room? The cost of such systems, with many projectors, cameras, image generators, and image acquisition devices, may initially be prohibitive, but is expected to decrease as the cost of such off-the-shelf equipment naturally falls with increased market size. The positive psychological effects of working and interacting in such an immersive environment within a “standard” office will be so compelling, we believe, that users will not readily wish to return to working within the constraints of a 21” monitor. Much of this work is being carried out as part of a collaboration among the five sites of the NSF Science and Technology Center in Computer Graphics and Scientific Visualization (Brown, Caltech, Cornell, UNC, and Utah), in collaboration with the GRASP Lab at the University of Pennsylvania, and as part of the National Teleimmersion Initiative sponsored by Advanced Network and Services.
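The dark-surface problem raised above can be made concrete with a per-pixel radiometric model. Under a simplified linear assumption (observed = albedo × projector drive + ambient; the model, names, and parameters here are my illustration, not the paper's method), the required projector drive grows as albedo shrinks, and on dark surfaces it saturates at the projector's maximum output:

```python
import numpy as np

def compensate(desired, albedo, ambient=0.0, max_drive=1.0):
    """Per-pixel radiometric compensation under an assumed linear model:
    observed = albedo * drive + ambient. Solve for the drive that would
    reproduce `desired`, clipped to the projector's dynamic range; the
    clipping is where dark surfaces defeat the compensation."""
    drive = (desired - ambient) / np.maximum(albedo, 1e-6)
    return np.clip(drive, 0.0, max_drive)
```

For a mid-gray surface (albedo 0.5) the needed drive is attainable, but for a dark surface (albedo 0.05) reproducing even moderate brightness demands many times the projector's maximum output, so the result clips to `max_drive` and the target appearance is unreachable — a compact statement of why this remains an open problem.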







Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Henry Fuchs
  1. Department of Computer Science, University of North Carolina at Chapel Hill, USA
