Abstract
This contribution examines the problem of linking two remote rooms into a single shared teleconference space using augmented reality (AR). Previous work on remote collaboration focuses either on displaying data and participants or on the interactions required to complete a given task. The physical surroundings are usually either disregarded entirely, or one room is chosen as the “hosting” room that serves as the reference space. In this paper, we aim to integrate both users’ surrounding physical spaces into the virtual conference space. We approach this problem with techniques borrowed from computational geometric analysis, computer graphics, and 2D image processing. Our goal is to provide a thorough discussion of the problem and to describe an approach to creating consensus realities for use in AR videoconferencing.
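The core idea of a consensus reality can be illustrated with a minimal sketch (not taken from the paper): if each room's free floor area is modeled as a convex polygon in a common reference frame, the mutually walkable space is their geometric intersection. The sketch below computes it with Sutherland-Hodgman clipping; the room shapes, alignment, and all names are illustrative assumptions.

```python
# Illustrative sketch: the "consensus reality" of two rooms, modeled as the
# intersection of their free-floor footprints. Room shapes are hypothetical.

def clip_polygon(subject, clip):
    """Intersect convex polygon `subject` with convex polygon `clip`.

    Both polygons are lists of (x, y) vertices in counter-clockwise (CCW)
    order; this is Sutherland-Hodgman clipping.
    """
    def inside(p, a, b):
        # p lies on or left of the directed edge a -> b (the CCW interior side).
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a and b.
        denom = (p[0] - q[0]) * (a[1] - b[1]) - (p[1] - q[1]) * (a[0] - b[0])
        t = ((p[0] - a[0]) * (a[1] - b[1]) - (p[1] - a[1]) * (a[0] - b[0])) / denom
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        pts, output = output, []
        for j in range(len(pts)):
            prev, cur = pts[j - 1], pts[j]
            if inside(cur, a, b):
                if not inside(prev, a, b):
                    output.append(intersect(prev, cur, a, b))
                output.append(cur)
            elif inside(prev, a, b):
                output.append(intersect(prev, cur, a, b))
    return output

def floor_area(poly):
    # Shoelace formula for the area of a simple polygon.
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                         - poly[(i + 1) % len(poly)][0] * poly[i][1]
                         for i in range(len(poly))))

# Two hypothetical rooms (4 m x 3 m and 5 m x 4 m), aligned in a common frame:
room_a = [(0, 0), (4, 0), (4, 3), (0, 3)]
room_b = [(1, 1), (6, 1), (6, 5), (1, 5)]
shared = clip_polygon(room_a, room_b)  # the mutually walkable region
```

In a full system (as the paper's title suggests), the complement of such a shared region would be the area users must be warned away from, e.g. by rendering virtual walls in the AR view; production systems would use a robust 3D boolean-operation library rather than this 2D convex sketch.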
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Lehment, N.H., Tiefenbacher, P., Rigoll, G. (2014). Don’t Walk into Walls: Creating and Visualizing Consensus Realities for Next Generation Videoconferencing. In: Shumaker, R., Lackey, S. (eds) Virtual, Augmented and Mixed Reality. Designing and Developing Virtual and Augmented Environments. VAMR 2014. Lecture Notes in Computer Science, vol 8525. Springer, Cham. https://doi.org/10.1007/978-3-319-07458-0_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-07457-3
Online ISBN: 978-3-319-07458-0