Illumination Insensitive Robot Self-Localization Using Panoramic Eigenspaces

  • Gerald Steinbauer
  • Horst Bischof
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3276)


We propose to use a robust appearance-based matching method, which has been shown to be insensitive to illumination changes and occlusion, for robot self-localization. The drawback of this method is that it relies on panoramic images captured in one fixed orientation: it either restricts the heading of the robot during navigation or requires an additional orientation sensor, e.g. a compass. To avoid these problems we propose to combine the appearance-based method with odometry data. We demonstrate the robustness of the proposed self-localization against changes in illumination with experimental results obtained in the RoboCup Middle-Size scenario.
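The core idea sketched in the abstract can be illustrated in a few lines: each panoramic image is circularly shifted back to the reference orientation using the heading supplied by odometry, projected into a precomputed eigenspace, and matched against the stored coefficients of the reference images. This is a minimal illustrative sketch, not the authors' implementation; all function names, array shapes, and the nearest-neighbour matching step are assumptions.

```python
import numpy as np

def rotate_panorama(img: np.ndarray, heading_rad: float) -> np.ndarray:
    """De-rotate a panoramic image: a heading of `heading_rad` corresponds
    to a circular shift of the image columns, which we undo here so the
    image appears as if taken in the reference orientation (heading 0)."""
    _, w = img.shape
    shift = int(round(heading_rad / (2.0 * np.pi) * w))
    return np.roll(img, -shift, axis=1)

def localize(img, heading_rad, mean, eigvecs, ref_coeffs, ref_poses):
    """Project the de-rotated panorama into the eigenspace and return the
    pose of the reference image with the closest coefficient vector."""
    canonical = rotate_panorama(img, heading_rad).ravel()
    a = eigvecs.T @ (canonical - mean)            # eigenspace coefficients
    d = np.linalg.norm(ref_coeffs - a, axis=1)    # distance to each reference
    return ref_poses[np.argmin(d)]
```

In the paper's setting the eigenspace would be built from panoramic reference images of the environment (e.g. via PCA/SVD on the mean-subtracted image vectors); the robust, illumination-insensitive coefficient recovery of the cited eigenspace method would replace the plain projection shown here.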


Keywords: Mobile Robot, Reference Image, Reference Location, Sensor Fusion, Panoramic Image


Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Gerald Steinbauer (1)
  • Horst Bischof (2)
  1. Institute for Software Technology, Graz University of Technology, Graz, Austria
  2. Institute for Computer Graphics and Vision, Graz University of Technology, Graz, Austria
