Performance of New Global Appearance Description Methods in Localization of Mobile Robots

  • Vicente Román
  • Luis Payá
  • María Flores
  • Sergio Cebollada
  • Óscar Reinoso
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1093)

Abstract

Autonomous robots should be able to perform localization and map creation robustly. Many studies and techniques have been evaluated over the past few years to solve these problems. This work focuses on the use of an omnidirectional vision sensor and global appearance techniques to describe each image. Global-appearance techniques consist of obtaining a single vector that describes the panoramic image as a whole. Once the images have been described, the mobile robot can use these descriptors both to create a map of the environment and to estimate its position and orientation within it. The main objective of this work is to propose and test new alternatives to describe scenes globally. The results will be used to propose new robust methods that estimate the position and orientation of the robot from the combination of several similarity measurements between visual information. Therefore, the present work is an initial study towards a new localization method, in which the previous and the new methods are compared. The experiments are carried out with real images captured in a heterogeneous scenario where humans and robots work together simultaneously. For this reason, variations in the lighting conditions, people who occlude the scene and changes in the furniture may appear.
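
Although the paper itself includes no code, the sketch below illustrates the general idea in Python with NumPy: a deliberately simple global-appearance descriptor (the L2-normalised vector of block-mean intensities of a grayscale panorama) and nearest-neighbour localization by cosine similarity against the stored map descriptors. The descriptor, function names and parameters are illustrative assumptions, not the methods evaluated in the paper.

    import numpy as np

    def global_descriptor(panorama, rows=4, cols=16):
        """Describe a grayscale panoramic image with a single global vector:
        split the image into a rows x cols grid, concatenate the mean
        intensity of each cell, and L2-normalise the result.
        (Illustrative stand-in for the descriptors studied in the paper.)"""
        h, w = panorama.shape
        img = panorama[:h - h % rows, :w - w % cols]   # crop to a grid multiple
        cells = img.reshape(rows, img.shape[0] // rows,
                            cols, img.shape[1] // cols)
        desc = cells.mean(axis=(1, 3)).ravel()         # one value per cell
        return desc / (np.linalg.norm(desc) + 1e-12)

    def localize(query_desc, map_descs):
        """Nearest-neighbour localization: return the index of the most
        similar stored descriptor (cosine similarity, since all descriptors
        are unit vectors) together with its similarity score."""
        sims = map_descs @ query_desc
        best = int(np.argmax(sims))
        return best, float(sims[best])

    # Toy usage: a "map" of three random panoramas; the query is map image 1,
    # so the estimated position should be index 1 with similarity ~1.0.
    rng = np.random.default_rng(0)
    map_images = [rng.random((128, 512)) for _ in range(3)]
    map_descs = np.stack([global_descriptor(im) for im in map_images])
    idx, score = localize(global_descriptor(map_images[1]), map_descs)
    print(idx, round(score, 3))

A useful property of panoramic images in this context is that a pure rotation of the robot appears as a circular shift of the image columns, so comparing shifted versions of the descriptors can also yield an orientation estimate; combining several such similarity measurements is the kind of mechanism the proposed localization method builds on.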

Keywords

Localization · Mobile robots · Global appearance descriptors · Omnidirectional images

Acknowledgements

This work has been supported by the Generalitat Valenciana through grants ACIF/2018/224 and ACIF/2017/146 and through the project AICO/2019/031: “Creación de modelos jerárquicos y localización robusta de robots móviles en entornos sociales” (Creation of hierarchical models and robust localization of mobile robots in social environments).

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Engineering Systems and Automation Department, Miguel Hernandez University, Alicante, Spain