Mobile Robot Localization through Identifying Spatial Relations from Detected Corners

  • Sergio Almansa-Valverde
  • José Carlos Castillo
  • Antonio Fernández-Caballero
  • José Manuel Cuadra Troncoso
  • Javier Acevedo-Rodríguez
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6687)


In this paper, the Harris corner detection algorithm is applied to images captured by a time-of-flight (ToF) camera. The ToF camera, mounted on a mobile robot, is exploited as a gray-scale camera for localization purposes: the gray-scale image encodes distances, from which good features to track are extracted. These features, which are actually points in space, form the basis of the spatial relations used in the localization algorithm. The approach to the localization problem is based on computing the spatial relations existing among the detected corners; the current spatial relations are then matched against the relations obtained during previous navigation.
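The core idea of running the Harris detector on the range image as if it were an ordinary gray-scale image can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the function name, window size, and the sensitivity constant k are assumptions (k = 0.04 is a conventional choice). The response is positive near corner-like depth discontinuities and negative along straight edges.

```python
import numpy as np

def harris_response(depth_img, k=0.04, win=3):
    """Harris corner response computed on a gray-scale image whose
    intensities are depth values from a ToF camera (illustrative)."""
    # Image gradients along rows (Iy) and columns (Ix).
    Iy, Ix = np.gradient(depth_img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    # Sum gradient products over a (2*win+1)^2 window using an
    # integral image (box filter); a Gaussian window is also common.
    def box_sum(a):
        p = np.pad(a, win + 1)[:-1, :-1].cumsum(0).cumsum(1)
        w = 2 * win + 1
        return p[w:, w:] - p[:-w, w:] - p[w:, :-w] + p[:-w, :-w]

    Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)

    # R = det(M) - k * trace(M)^2 for the 2x2 structure matrix M.
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# Toy depth image: a raised square block. Corners of the block give
# strong positive responses; straight edges give negative ones.
depth = np.zeros((40, 40))
depth[10:30, 10:30] = 1.0
R = harris_response(depth)
```

The detected maxima of R correspond to corner pixels whose depth values place them as points in space, from which the pairwise spatial relations used for matching can be computed.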


Keywords: Mobile Robot · Spatial Relation · Localization Algorithm · Robot Position · Height Interval





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Sergio Almansa-Valverde (1)
  • José Carlos Castillo (1)
  • Antonio Fernández-Caballero (1, 2)
  • José Manuel Cuadra Troncoso (3)
  • Javier Acevedo-Rodríguez (4)

  1. Instituto de Investigación en Informática de Albacete (I3A), n&aIS Group, Albacete, Spain
  2. Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain
  3. Departamento de Inteligencia Artificial, E.T.S.I. Informática, Universidad Nacional de Educación a Distancia, Madrid, Spain
  4. Departamento de Teoría de la Señal y Comunicaciones, Universidad de Alcalá, Alcalá de Henares, Spain
