3D Computer Vision: From Points to Concepts
The emergence of inexpensive structured-light sensors, such as the Kinect, has sparked renewed interest in the processing of 3D visual data. Applications for these technologies are abundant, from robot vision to 3D scanning. In this paper we walk through the main steps of a typical 3D vision system, from sensors and point clouds up to understanding scene contents, covering keypoint detectors, descriptors, set distances, object recognition and tracking, and the biological motivation behind some of these methods. We present several approaches developed at our lab and discuss some current challenges.
Keywords: Point Cloud · Convolutional Neural Network · Transfer Learning · Scene Point · Voxel Grid
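An early step in a typical point-cloud pipeline such as the one surveyed here is downsampling the raw sensor data with a voxel grid, replacing all points inside each voxel with their centroid. The sketch below is an illustrative NumPy implementation of this standard technique, not code from the paper; the function name and voxel size are our own choices.

```python
import numpy as np

def voxel_grid_downsample(points, voxel_size):
    """Downsample an (N, 3) point cloud by averaging the points in each voxel."""
    # Assign each point to a voxel via integer division of its coordinates.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points sharing a voxel index and compute each group's centroid.
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)   # accumulate point sums per voxel
    return sums / counts[:, None]      # centroid = sum / count

# Example: 1000 random points in the unit cube, reduced with 0.25-unit voxels
# (at most 4 x 4 x 4 = 64 occupied voxels, so at most 64 output points).
cloud = np.random.rand(1000, 3)
small = voxel_grid_downsample(cloud, 0.25)
```

Libraries such as PCL and Open3D ship optimized versions of this filter; the point of the sketch is only to make the voxel-grid idea from the keywords concrete.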