Camera Self-localization Using Uncalibrated Images to Observe Prehistoric Paints in a Cave

  • Tommaso Gramegna
  • Grazia Cicirelli
  • Giovanni Attolico
  • Arcangelo Distante
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3617)


Archaeological caves that are hardly accessible to visitors can be enjoyed through a mobile vehicle that transmits to users located outside a continuous stream of images of the cave; these images can be visually integrated with information and data to increase the enjoyment and understanding of the site. This application requires the vehicle to self-position itself with respect to a target. Preserving the cave imposes the use of natural landmarks as reference points, possibly with uncalibrated techniques. We have applied a modified POSIT algorithm (a camera pose estimation method that works on uncalibrated images) to self-position the robot. Because natural landmarks are difficult to evaluate inside the cave, the tests have been made using a photograph of the prehistoric wall paintings of the archaeological cave “Grotta dei Cervi”. The modified version of POSIT has first been compared with the original formulation using a suitably designed grid target. Then the performance of the modified POSIT has been evaluated by computing the position of the robot with respect to the target on the basis of feature points automatically identified in the picture of a real painting. The results obtained in the experimental tests in our laboratory are very encouraging for the experimentation in the real environment.
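For context, the original POSIT algorithm that the paper modifies can be sketched as follows. This is a minimal numpy implementation of the classic iterative scheme of DeMenthon and Davis (scaled orthographic projection with iterative depth corrections), assuming a calibrated focal length and a non-coplanar set of model points; the paper's modified version, which removes the calibration requirement, is not reproduced here.

```python
import numpy as np

def posit(object_points, image_points, focal_length, iterations=30):
    """Classic POSIT (DeMenthon & Davis): estimate the pose of a known,
    non-coplanar 3D point set from a single image.
    object_points: (N, 3) model coordinates; point 0 is the reference point.
    image_points:  (N, 2) pixel coordinates relative to the principal point.
    Returns the rotation matrix R (3, 3) and translation vector T (3,)."""
    M = np.asarray(object_points, dtype=float)
    m = np.asarray(image_points, dtype=float)
    A = M[1:] - M[0]               # object vectors from the reference point
    B = np.linalg.pinv(A)          # (3, N-1) pseudo-inverse of the model matrix
    eps = np.zeros(len(M) - 1)     # scaled-orthographic correction terms
    for _ in range(iterations):
        # Corrected image coordinates under the current depth estimates
        x = m[1:, 0] * (1 + eps) - m[0, 0]
        y = m[1:, 1] * (1 + eps) - m[0, 1]
        I, J = B @ x, B @ y
        s = np.sqrt(np.linalg.norm(I) * np.linalg.norm(J))   # projection scale
        i = I / np.linalg.norm(I)  # first two rows of the rotation matrix
        j = J / np.linalg.norm(J)
        k = np.cross(i, j)
        Z0 = focal_length / s      # depth of the reference point
        eps = A @ k / Z0           # refined corrections for the next iteration
    R = np.vstack([i, j, np.cross(i, j)])
    T = np.array([m[0, 0], m[0, 1], focal_length]) * Z0 / focal_length
    return R, T
```

With noise-free correspondences the loop converges to the exact pose in a few iterations; with feature points extracted automatically from a painting, as in the paper, the accuracy depends on the quality of the detected landmarks.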


Keywords: Feature Point, Wall Painting, Grid Target, Natural Landmark, Uncalibrated Image



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Tommaso Gramegna (1)
  • Grazia Cicirelli (1)
  • Giovanni Attolico (1)
  • Arcangelo Distante (1)
  1. Institute of Intelligent Systems for Automation – C.N.R., Bari, Italy
