Instant Movie Casting with Personality: Dive into the Movie System

  • Shigeo Morishima
  • Yasushi Yagi
  • Satoshi Nakamura
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6774)

Abstract

“Dive into the Movie (DIM)” is a project that aims to realize an innovative entertainment system providing an immersive story experience: every audience member can participate in the story as a cast member and share the impression of the film with family and friends. To realize this system, we model and capture personal characteristics of the face, body, gait, hair, and voice instantly and precisely. All of the modeling, character synthesis, rendering, and compositing processes must be performed in real time without any manual operation. In this paper, a novel entertainment system, the Future Cast System (FCS), is introduced as a prototype of DIM. The first experimental trial of FCS was demonstrated at the World Exposition 2005, where 1,630,000 people experienced the system over six months. Finally, the up-to-date DIM system, which realizes a more realistic sensation, is introduced.
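To make the fully automatic pipeline named in the abstract concrete, the sketch below outlines its stages (per-visitor capture, personal modeling of face, gait and voice, then character synthesis, rendering, and compositing) as a minimal Python skeleton. All class, function, and field names are hypothetical stubs introduced for illustration; the actual FCS implementation is not described at this level of detail here.

```python
# Illustrative sketch of the DIM/FCS pipeline from the abstract:
# capture -> personal modeling (face, gait, voice) -> synthesis -> render -> composite.
# Every name below is a hypothetical stand-in, not the authors' implementation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AudienceCapture:
    """Raw per-visitor measurements taken before the screening."""
    face_images: List[bytes] = field(default_factory=list)  # multi-view face photos
    gait_frames: List[bytes] = field(default_factory=list)  # silhouettes from a walking pass
    voice_sample: bytes = b""                                # short recorded utterance


@dataclass
class PersonalModel:
    """Personalized character assets derived with no manual operation."""
    head_model: dict    # individualized 3D head model (stub)
    gait_params: dict   # characteristic gait motion parameters (stub)
    voice_profile: dict # parameters for scenario speech assignment (stub)


def build_personal_model(capture: AudienceCapture) -> PersonalModel:
    """Stand-in for automatic modeling of face, gait, and voice."""
    head = {"n_face_images": len(capture.face_images)}
    gait = {"n_gait_frames": len(capture.gait_frames)}
    voice = {"sample_bytes": len(capture.voice_sample)}
    return PersonalModel(head, gait, voice)


def render_cast_movie(models: List[PersonalModel], shots: List[str]) -> List[str]:
    """Stand-in for synthesizing, rendering, and compositing every visitor into each shot."""
    frames = []
    for shot in shots:
        cast = [f"character_{i}" for i, _ in enumerate(models)]  # character synthesis stub
        frames.append(f"{shot}: " + ", ".join(cast))              # render + composite stub
    return frames


if __name__ == "__main__":
    captures = [AudienceCapture(face_images=[b"img"]), AudienceCapture()]
    movie = render_cast_movie([build_personal_model(c) for c in captures],
                              shots=["opening scene", "finale"])
    print("\n".join(movie))
```

In the deployed system each of these stubs corresponds to a dedicated real-time component (head modeling, gait measurement, voice assignment, skin rendering, and so on); the sketch only shows how they would chain together without manual intervention.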

Keywords

Personality Modeling, Gait Motion, Entertainment, Face Capture

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Shigeo Morishima (1)
  • Yasushi Yagi (2)
  • Satoshi Nakamura (3)
  1. Dept. of Advanced Science and Engineering, Waseda University, Tokyo, Japan
  2. The Institute of Scientific and Industrial Research, Osaka University, Osaka, Japan
  3. National Institute of Information and Communications Technology, Kyoto, Japan