3D Sensing Techniques for Multimodal Data Analysis and Integration in Smart and Autonomous Systems

  • Zhenyu Fang
  • He Sun
  • Jinchang Ren
  • Huimin Zhao
  • Sophia Zhao
  • Stephen Marshall
  • Tariq Durrani
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 463)

Abstract

For smart and autonomous systems, 3D positioning and measurement are essential, as their precision strongly affects how widely the techniques can be applied. In this paper, we summarize and compare different techniques and sensors that can potentially be used for multimodal data analysis and integration. The comparison provides useful guidance for the design and implementation of relevant systems.
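As a concrete illustration of the kind of 3D measurement the surveyed depth cameras perform (this sketch is not part of the original paper), the snippet below back-projects a per-pixel depth image into a 3D point cloud under an assumed pinhole camera model. The intrinsics (fx, fy, cx, cy) and the synthetic depth map are hypothetical values chosen purely for the example.

```python
# Minimal illustrative sketch: depth image -> 3D point cloud (pinhole model).
# Intrinsics and depth data below are assumed example values, not from the paper.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth image (metres) to an (N, 3) array of 3D points.

    Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    """
    v, u = np.indices(depth.shape)            # pixel row (v) and column (u) grids
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]           # drop invalid (zero-depth) pixels

# Example: a synthetic 480x640 depth map with Kinect-like intrinsics.
depth = np.full((480, 640), 2.0)              # toy data: every pixel 2 m away
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                            # (307200, 3)
```

The precision argument in the abstract shows up directly here: any error in the measured depth Z scales linearly into the recovered X and Y coordinates, so sensor accuracy bounds the accuracy of the reconstructed 3D positions.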

Keywords

3D positioning and measurement · Multimodal data analysis · Depth camera · LiDAR · SAR · Ultrasonic

Acknowledgement

This work was supported by the National Natural Science Foundation of China (61672008), the Guangdong Provincial Application-oriented Technical Research and Development Special Fund (2016B010127006, 2015B010131017), the Natural Science Foundation of Guangdong Province (2016A030311013, 2015A030313672), and the International Scientific and Technological Cooperation Projects of the Education Department of Guangdong Province (2015KGJHZ021).

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Zhenyu Fang (1)
  • He Sun (1)
  • Jinchang Ren (1)
  • Huimin Zhao (2, 3)
  • Sophia Zhao (1)
  • Stephen Marshall (1)
  • Tariq Durrani (1)

  1. Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, UK
  2. School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
  3. The Guangzhou Key Laboratory of Digital Content Processing and Security Technologies, Guangzhou, China
