Detecting rigid links between sensors for automatic sensor space alignment in virtual environments

  • Original Article
  • Published in Virtual Reality

Abstract

Simultaneous use of multiple sensor systems provides improved accuracy and tracking range compared to a single sensor system for virtual reality applications. However, calibrating multiple sensor technologies is non-trivial and, at a minimum, requires significant and likely regular user-actioned calibration procedures. To enable ambient sensor calibration, we present techniques for automatically identifying relations between rigidly linked 6DoF and 3DoF sensors belonging to different sensor systems for body tracking. These relations then allow the sensor systems to be aligned automatically. Two techniques are presented, analysed in simulation under varying noise and latency conditions, and applied to two case studies. The first study matched sensors tracked by a gold-standard rigid-body tracker to one of six rigid bodies tracked by the first-generation Kinect sensor, with each sensor identified correctly in at least 76% of estimates. The second case study was an interactive version of the system that detects a change in sensor configuration in 1–2 s and requires only movements of less than 15 cm or \(90^\circ\).
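
The estimators themselves are described in the full text, but the reason a rigid link is detectable at all can be stated briefly: if two 6DoF pose streams \(\mathbf{T}_t\) and \(\mathbf{S}_t\) are rigidly linked, then \(\mathbf{S}_t = \mathbf{Y}\,\mathbf{T}_t\,\mathbf{X}\) for constant transforms \(\mathbf{X}\) (sensor-to-sensor offset) and \(\mathbf{Y}\) (frame alignment), so the incremental motions \(\mathbf{S}_t^{-1}\mathbf{S}_{t+1} = \mathbf{X}^{-1}(\mathbf{T}_t^{-1}\mathbf{T}_{t+1})\mathbf{X}\) are conjugate and in particular have identical per-step rotation angles, whatever the unknown \(\mathbf{X}\) and \(\mathbf{Y}\). The sketch below uses this single cue to score a candidate sensor pairing. It illustrates the principle only and is not the authors' implementation; the pose representation (4×4 homogeneous matrices), function names, and threshold are all assumptions.

```python
import numpy as np

def rotation_angle(R):
    """Rotation angle of a 3x3 rotation matrix R, in radians."""
    # Standard formula: angle = arccos((tr(R) - 1) / 2); the clip guards
    # against floating-point values slightly outside [-1, 1].
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def step_angles(poses):
    """Per-step rotation angles of a stream of 4x4 homogeneous poses."""
    return np.array([
        rotation_angle(poses[t][:3, :3].T @ poses[t + 1][:3, :3])
        for t in range(len(poses) - 1)
    ])

def likely_rigidly_linked(poses_a, poses_b, threshold=0.05):
    """Heuristic rigid-link test for two time-aligned pose streams:
    a small mean disagreement between the per-step rotation angles
    suggests a rigid link. The 0.05 rad threshold is illustrative."""
    da, db = step_angles(poses_a), step_angles(poses_b)
    return float(np.mean(np.abs(da - db))) < threshold
```

A correspondence search would evaluate such a score over all candidate pairs of sensors from the two systems and keep the best-scoring matches before solving for the actual alignment.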


Notes

  1. Constellation is the positional tracking system used with the Oculus Rift and Touch systems; see http://www.oculus.com (accessed 04/01/2018).

  2. The Lighthouse tracking system comprises two base stations used with the HTC Vive virtual reality headset; see http://www.vive.com (accessed 04/01/2018).

  3. See http://neuronmocap.com/products/perception_neuron (accessed 04/01/2018).

  4. The Microsoft Kinect is a consumer-grade infrared structured-light depth camera originally developed for gaming applications; see http://dev.windows.com/en-us/kinect (accessed 04/01/2018).

  5. Real sensors will likely have different latencies and sampling rates. Here, we consider that data for each time step t is sampled in synchronisation with the lowest update rate, using the latest data from the faster sensors; that is, we assume approximately synchronous samples (a zero-order-hold sketch of this sampling model follows these notes).

  6. SO(3), the special orthogonal group, is the group of rotations of 3-dimensional Euclidean space \(\mathbb {R}^3\) (written out explicitly after these notes).

  7. SE(3), the special Euclidean group, is the group of rigid motions (rotations and translations) of 3-dimensional Euclidean space.

  8. This is not actually a norm, as \(\angle (\mathbf {R})\) fails the triangle inequality and absolute homogeneity due to its periodicity. However, positivity and definiteness are the only properties needed for measuring the size of the rotation of a matrix.

  9. OpenNI is an open-source API for depth cameras and other natural input devices; see http://github.com/OpenNI/OpenNI (accessed 04/01/2018).

  10. See http://www.optitrack.com (accessed 04/01/2018).

  11. NiTE open source tracking middleware: http://openni.ru/files/nite/ (accessed 04/01/2018).

  12. A beta Unreal Engine 4 plug-in for skeleton-centered virtual reality sensor fusion based on this work is available at http://github.com/JakeFountain/Spooky (accessed 08/01/2018).
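
The sampling model in note 5 corresponds to a zero-order hold: fuse at the time stamps of the slowest sensor, pairing each with the most recent sample from the faster sensor. A minimal sketch under the assumption that each stream is a time-sorted list of (timestamp, sample) pairs; the names here are illustrative, not from the paper:

```python
from bisect import bisect_right

def synchronise(slow_stream, fast_stream):
    """Yield (t, slow_sample, latest_fast_sample) at the slow sensor's
    rate. Each stream is a time-sorted list of (timestamp, sample)."""
    fast_times = [t for t, _ in fast_stream]
    for t, slow_sample in slow_stream:
        # Index of the most recent fast sample taken at or before time t.
        i = bisect_right(fast_times, t) - 1
        if i >= 0:  # skip steps before the fast sensor produced any data
            yield t, slow_sample, fast_stream[i][1]
```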
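For completeness, the objects in notes 6–8 can be written out explicitly. The formula for \(\angle (\mathbf {R})\) below is the standard rotation angle on SO(3), which we take to be the measure note 8 refers to:

\[
\begin{aligned}
SO(3) &= \{\mathbf {R}\in \mathbb {R}^{3\times 3} \mid \mathbf {R}^{\top }\mathbf {R}=\mathbf {I},\ \det \mathbf {R}=1\},\\
SE(3) &= \{(\mathbf {R},\mathbf {t}) \mid \mathbf {R}\in SO(3),\ \mathbf {t}\in \mathbb {R}^{3}\},\qquad \mathbf {x}\mapsto \mathbf {R}\mathbf {x}+\mathbf {t},\\
\angle (\mathbf {R}) &= \arccos \left( \frac{\operatorname{tr}(\mathbf {R})-1}{2}\right) \in [0,\pi ].
\end{aligned}
\]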


Acknowledgements

This work was supported by an Australian Postgraduate Allowance Scholarship and the Newcastle Robotics Laboratory at The University of Newcastle, Australia.

Author information

Corresponding author

Correspondence to Shamus P. Smith.

About this article

Cite this article

Fountain, J., Smith, S.P. Detecting rigid links between sensors for automatic sensor space alignment in virtual environments. Virtual Reality 23, 71–84 (2019). https://doi.org/10.1007/s10055-018-0341-8
