Abstract

We report on a prototype system that helps actors on one stage interact and perform with actors on other stages as if they shared the same stage. At each stage, four 3D cameras tiled back to back, covering an almost 360-degree view, continuously record the actors. The system processes the recorded data on the fly to discover actions it should react to, and it streams data about the actors and their actions to remote stages, where each actor is represented by a remote presence, a visualization of the actor. When the remote presences lag too far behind because of network and processing delays, the system applies various techniques to hide this, including rapidly switching to pre-recorded video or animations of individual actors. The system amplifies actors’ actions by adding text and animations to the remote presences to better carry the meaning of actions across distance. The system currently scales across the Internet with good performance to three stages, and comprises in total 15 computers, 12 cameras, and several projectors.




Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Su, F., Tartari, G., Bjørndalen, J.M., Ha, P.H., Anshus, O.J. (2013). MultiStage: Acting across Distance. In: Nesi, P., Santucci, R. (eds) Information Technologies for Performing Arts, Media Access, and Entertainment. ECLAP 2013. Lecture Notes in Computer Science, vol 7990. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40050-6_20

  • DOI: https://doi.org/10.1007/978-3-642-40050-6_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-40049-0

  • Online ISBN: 978-3-642-40050-6
