
Noise Robustness Analysis of Point Cloud Descriptors

  • Yasir Salih
  • Aamir Saeed Malik
  • Nicolas Walter
  • Désiré Sidibé
  • Naufal Saad
  • Fabrice Meriaudeau
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8192)

Abstract

In this paper, we investigate the effect of noise on 3D point cloud descriptors. Various types of point cloud descriptors have been introduced in recent years as advances in computing power have made processing point cloud data more feasible. Most of these descriptors encode the orientation differences between pairs of 3D points on the object and represent these differences in a histogram. Earlier studies compared the performance of different point cloud descriptors; however, no study has yet discussed the effect of noise on descriptor performance. This paper compares the performance of nine local and global descriptors under 10 levels of Gaussian and impulse noise added to the point cloud data. The study shows that 3D descriptors are more sensitive to Gaussian noise than to impulse noise. Descriptors based on surface normals are sensitive to Gaussian noise but robust to impulse noise, whereas descriptors based on accumulating points in a spherical grid are robust to Gaussian noise but sensitive to impulse noise. Among global descriptors, the viewpoint feature histogram (VFH) offers a good compromise between accuracy, stability, and computational complexity under both Gaussian and impulse noise. Among local descriptors, SHOT (signature of histograms of orientations) performs best, with good results under both noise types.
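The paper does not specify the exact noise parameters used for its 10 corruption levels, but the two perturbation models it evaluates can be sketched as follows. This is a minimal illustration (not the authors' code): Gaussian noise displaces every point by a zero-mean normal offset, while impulse noise displaces only a randomly chosen fraction of points by a larger bounded offset. The function names, `sigma`, `fraction`, and `magnitude` values are illustrative assumptions.

```python
import numpy as np

def add_gaussian_noise(points, sigma=0.01, rng=None):
    """Perturb every point of an (N, 3) cloud with zero-mean
    Gaussian noise of standard deviation `sigma` (illustrative model)."""
    rng = np.random.default_rng(rng)
    return points + rng.normal(0.0, sigma, size=points.shape)

def add_impulse_noise(points, fraction=0.05, magnitude=0.1, rng=None):
    """Displace a random `fraction` of the points by a uniform offset
    in [-magnitude, magnitude] per axis; the rest stay untouched."""
    rng = np.random.default_rng(rng)
    noisy = points.copy()
    n_outliers = int(fraction * len(points))
    idx = rng.choice(len(points), size=n_outliers, replace=False)
    noisy[idx] += rng.uniform(-magnitude, magnitude, size=(n_outliers, 3))
    return noisy
```

Sweeping `sigma` (or `fraction`/`magnitude`) over 10 increasing values reproduces the kind of graded corruption the study applies before recomputing each descriptor.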

Keywords

3D descriptors · features histogram · noise robustness · point cloud library



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Yasir Salih (1, 2)
  • Aamir Saeed Malik (1)
  • Nicolas Walter (1)
  • Désiré Sidibé (2)
  • Naufal Saad (1)
  • Fabrice Meriaudeau (2)

  1. Centre for Intelligent Signal & Imaging Research, Universiti Teknologi PETRONAS, Tronoh, Malaysia
  2. Le2i UMR CNRS 6306, Université de Bourgogne, Le Creusot, France
