Virtual Reality Based Immersive Telepresence System for Remote Conversation and Collaboration

  • Zhipeng Tan
  • Yuning Hu
  • Kun Xu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10582)

Abstract

We developed a Virtual Reality (VR) based telepresence system that provides a novel immersive experience for remote conversation and collaboration. By wearing VR headsets, all participants are gathered into the same virtual space, where 3D cartoon avatars represent them. These VR avatars realistically emulate the participants' head postures, facial expressions, and hand motions, enabling enjoyable group-to-group conversations among people who are spatially separated. Moreover, our VR telepresence system offers distinctly new modes of remote collaboration: for example, users can view PowerPoint slides or watch videos together, or cooperate on solving a math problem by working on a virtual blackboard, all of which can hardly be achieved with a conventional video-based telepresence system. Experiments show that our system provides an unprecedented immersive experience for tele-conversation and opens new possibilities for remote collaboration.
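
The abstract gives no implementation details, but the design it describes, in which each participant's head pose, facial expression, and hand motion drive a shared avatar, implies that clients exchange a compact per-frame avatar state. The following minimal Python sketch shows one hypothetical encoding of such a state message, round-tripped locally over UDP as a smoke test; the field names, dimensions (24 expression weights, 15 joints per hand), and choice of transport are assumptions, not details taken from the paper.

    # Hypothetical per-frame avatar state message for a VR telepresence client.
    # Field layout, names, and transport are illustrative assumptions; the
    # paper does not specify its wire format.
    import socket
    import struct
    from dataclasses import dataclass
    from typing import List

    NUM_BLENDSHAPES = 24   # assumed size of the facial expression basis
    NUM_HAND_JOINTS = 15   # assumed joints per hand, one quaternion each

    @dataclass
    class AvatarState:
        user_id: int
        head_pos: List[float]     # x, y, z in metres
        head_rot: List[float]     # quaternion x, y, z, w
        blendshapes: List[float]  # expression weights in [0, 1]
        left_hand: List[float]    # NUM_HAND_JOINTS quaternions, flattened
        right_hand: List[float]

        def pack(self) -> bytes:
            """Serialize to a fixed-size little-endian binary record."""
            floats = (self.head_pos + self.head_rot + self.blendshapes
                      + self.left_hand + self.right_hand)
            return struct.pack(f"<I{len(floats)}f", self.user_id, *floats)

        @classmethod
        def unpack(cls, data: bytes) -> "AvatarState":
            n = (len(data) - 4) // 4  # number of 32-bit floats after the id
            user_id, *floats = struct.unpack(f"<I{n}f", data)
            i = 0
            def take(k):
                nonlocal i
                out = floats[i:i + k]
                i += k
                return out
            return cls(user_id, take(3), take(4), take(NUM_BLENDSHAPES),
                       take(4 * NUM_HAND_JOINTS), take(4 * NUM_HAND_JOINTS))

    if __name__ == "__main__":
        # Round-trip one frame through a local UDP socket.
        state = AvatarState(7, [0.0, 1.5, 0.0], [0, 0, 0, 1],
                            [0.0] * NUM_BLENDSHAPES,
                            [0, 0, 0, 1] * NUM_HAND_JOINTS,
                            [0, 0, 0, 1] * NUM_HAND_JOINTS)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("127.0.0.1", 0))
        sock.sendto(state.pack(), sock.getsockname())
        assert AvatarState.unpack(sock.recv(4096)) == state

In practice such a packet would be sent at the headset's frame rate, with interpolation or prediction on the receiving side to smooth over network jitter.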

Keywords

Virtual reality · Telepresence system · VR avatar · Remote collaboration · Teleconferencing

Acknowledgements

This work was supported by a Research Grant of the Beijing Higher Institution Engineering Research Center and by the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (MC-IRSES, grant No. 612627).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Science and Technology, Tsinghua University, Beijing, China
  2. City College, Zhejiang University, Hangzhou, China
