Methods of Mobile Robot Visual Navigation and Environment Mapping

  • J. S. Pershina
  • S. Ya. Kazdorf
  • A. V. Lopota
Analysis and Synthesis of Signals and Images

Abstract

State-of-the-art methods of visual navigation for mobile robots are considered. A hierarchical structure for representing the environment, corresponding to the hierarchical organization of the mobile robot control system, is proposed. Current approaches to constructing map models are presented. Developing these approaches will bring robot navigation systems closer to the one formed by the human intellect, which combines vision and a semantic view of the world within cognitive maps.
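
To make the idea of a layered environment representation concrete, the sketch below shows one possible hierarchical map in Python: a metric occupancy grid at the bottom, a topological graph of places above it, and semantic labels on top. This three-level decomposition and every name in the sketch (MetricLayer, TopologicalLayer, SemanticLayer, places_with_label) are illustrative assumptions; the paper's actual structure is not reproduced on this page.

    from dataclasses import dataclass, field

    # Hypothetical three-level environment map: metric grid -> topological
    # graph -> semantic labels. This split is a common decomposition in the
    # visual-navigation literature, not the paper's specific proposal.

    @dataclass
    class MetricLayer:
        resolution_m: float                            # grid cell size, meters
        occupancy: dict = field(default_factory=dict)  # (i, j) -> occupancy probability

    @dataclass
    class TopologicalLayer:
        places: dict = field(default_factory=dict)  # place id -> (x, y) anchor on the grid
        edges: set = field(default_factory=set)     # traversable (place, place) pairs

    @dataclass
    class SemanticLayer:
        labels: dict = field(default_factory=dict)  # place id -> set of class names

    @dataclass
    class HierarchicalMap:
        metric: MetricLayer
        topology: TopologicalLayer
        semantics: SemanticLayer

        def places_with_label(self, label: str):
            """Resolve a high-level goal ("go to the kitchen") in the semantic
            layer, returning metric anchors a low-level planner can use."""
            return [self.topology.places[p]
                    for p, tags in self.semantics.labels.items()
                    if label in tags and p in self.topology.places]

    # Usage: label a place semantically, then query it by class name.
    m = HierarchicalMap(MetricLayer(0.05), TopologicalLayer(), SemanticLayer())
    m.topology.places[0] = (1.2, 3.4)
    m.semantics.labels[0] = {"kitchen"}
    print(m.places_with_label("kitchen"))  # [(1.2, 3.4)]

The point of such a layering is that each level of the control hierarchy queries only the layer at its own abstraction: a mission planner reasons over semantic labels, a route planner over the topological graph, and a local motion planner over the metric grid.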

Keywords

visual navigation, convolutional neural networks, semantic segmentation, cognitive map

Copyright information

© Allerton Press, Inc. 2019

Authors and Affiliations

  1. Novosibirsk State Technical University, Novosibirsk, Russia
  2. Central Research Institute of Robotics and Technical Cybernetics, St. Petersburg, Russia
