Particle Filter for Reliable Estimation of the Ground Plane from Depth Images in a Travel Aid for the Blind

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1196)

Abstract

This paper presents a reliable method for segmenting image sequences of 3D scenes in an electronic travel aid for the blind. We propose an implementation of the particle filtering (PF) algorithm that estimates and tracks the orientation and position of the obstacle-free ground plane. We explain how the state vector in the PF algorithm was defined, and verify the results on a large set of indoor and outdoor sequences captured by a moving stereovision camera. The camera's motion is unrestricted, with six degrees of freedom (DoF). We show that the root mean square (RMS) errors of the estimated ground plane orientation do not exceed 2° and 3° for the roll and pitch angles, respectively. The mean Jaccard similarity coefficient measuring the overlap between the detected ground plane regions and the ground-truth regions was 0.94. Although the method was developed for an electronic travel aid for the blind, it could also find application in the automatic navigation of autonomous vehicles and unmanned aerial vehicles (UAVs).
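The abstract outlines the approach without implementation detail. The sketch below illustrates, under stated assumptions, how a particle filter can track a [roll, pitch, camera-height] state from depth frames: the plane parameterization, noise levels, Gaussian inlier weighting, and all function names are illustrative choices, not the authors' implementation.

```python
# Minimal sketch (assumed design, not the paper's code) of a particle filter
# tracking the ground plane [roll, pitch, camera height] from depth images.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, stride=8):
    """Back-project a depth image (meters) to a sparse 3D point cloud."""
    h, w = depth.shape
    vs, us = np.mgrid[0:h:stride, 0:w:stride]
    z = depth[vs, us]
    valid = z > 0
    z, us, vs = z[valid], us[valid], vs[valid]
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def plane_normal(roll, pitch):
    """Unit normal of the ground plane for given camera roll/pitch (radians).
    Assumed convention: at zero roll/pitch the normal is the camera's -Y axis."""
    n = np.array([np.sin(roll),
                  -np.cos(roll) * np.cos(pitch),
                  np.cos(roll) * np.sin(pitch)])
    return n / np.linalg.norm(n)

def particle_filter_step(particles, weights, points,
                         motion_std=(0.02, 0.02, 0.02), inlier_sigma=0.05):
    """One predict/update/resample cycle.

    particles: (N, 3) array of [roll, pitch, height] hypotheses.
    points:    (M, 3) 3D points from the current depth frame.
    """
    n = len(particles)
    # Predict: random-walk motion model for the freely moving (6-DoF) camera.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: weight each hypothesis by how many points lie close to its plane.
    for i, (roll, pitch, height) in enumerate(particles):
        normal = plane_normal(roll, pitch)
        dist = np.abs(points @ normal + height)      # point-to-plane distance
        weights[i] = np.exp(-0.5 * (dist / inlier_sigma) ** 2).sum()
    weights /= weights.sum()
    # Resample (systematic) to concentrate particles on likely planes.
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)

# Usage: after each frame, take the (weighted) mean of the particles as the
# ground plane estimate and label pixels whose 3D points lie near that plane.
```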

Keywords

Object tracking · Particle filtering · Depth images

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Institute of Electronics, Lodz University of Technology, Lodz, Poland