
User Modeling, pp. 215–226

Cinematographic User Models for Automated Realtime Camera Control in Dynamic 3D Environments

  • William H. Bares
  • James C. Lester
Part of the International Centre for Mechanical Sciences book series (CISM, volume 383)

Abstract

Advances in 3D graphics technology have accelerated the construction of dynamic 3D environments. Despite their promise for scientific and educational applications, much of this potential has gone unrealized because runtime camera control software lacks user-sensitivity. Current environments rely on viewpoint sequences that are either directly controlled by the user or determined primarily by the actions and geometry of the scene. Because of the complexity of rapidly changing environments, users typically cannot manipulate objects in environments while simultaneously issuing camera control commands. To address these issues, we have developed UCam, a realtime camera planner that employs cinematographic user models to render customized visualizations of dynamic 3D environments. After interviewing users to determine their preferred directorial style and pacing, UCam consults the resulting cinematographic user model to plan camera sequences whose shot vantage points and cutting rates are tailored to the user in realtime. Evaluations of UCam in a dynamic 3D testbed are encouraging.
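The abstract does not describe UCam's internal representations, but the idea of a cinematographic user model driving shot selection and cutting rate can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; all class, method, and parameter names (and the style/pacing vocabularies) are hypothetical assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch only: UCam's actual model and planner are not
# specified in the abstract; names and values here are assumptions.

@dataclass
class CinematographicUserModel:
    style: str   # preferred directorial style, e.g. "dramatic" or "informational"
    pacing: str  # preferred pacing, e.g. "fast" or "slow"

    def cut_interval(self) -> float:
        """Seconds to hold a shot before cutting, derived from pacing."""
        return 2.0 if self.pacing == "fast" else 6.0

    def vantage(self, action: str) -> str:
        """Choose a shot vantage point for the current scene action."""
        if self.style == "dramatic":
            return "low-angle close-up" if action == "conflict" else "tracking shot"
        return "high-angle wide shot"  # informational style favors overview shots

def plan_shots(model: CinematographicUserModel, actions: list[str]) -> list[tuple[str, float]]:
    """Plan a (vantage, duration) shot sequence tailored to the user model."""
    return [(model.vantage(a), model.cut_interval()) for a in actions]

viewer = CinematographicUserModel(style="dramatic", pacing="fast")
print(plan_shots(viewer, ["conflict", "explore"]))
```

The point of the sketch is only the division of labor the abstract implies: the user model answers questions about preference (vantage, cutting rate), while the planner walks the stream of scene actions and queries the model for each shot.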

Keywords

User Model, Camera Position, Virtual Camera, Camera Control, Camera Transition



Copyright information

© Springer-Verlag Wien 1997

Authors and Affiliations

  • William H. Bares¹
  • James C. Lester¹
  1. Multimedia Laboratory, Department of Computer Science, North Carolina State University, Raleigh, USA
