An Evaluation of Camera Pose Methods for an Augmented Reality System: Application to Teaching Industrial Robots

  • Madjid Maidi
  • Malik Mallem
  • Laredj Benchikh
  • Samir Otmane
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7420)


In the automotive industry, industrial robots are widely used on production lines for tasks such as welding, painting, and assembly. Their use requires users to master both manipulation and robot control. Recently, new tools have been developed to realize fast and accurate trajectories in many production sectors, using either a real vehicle prototype or a generalized design within a virtual simulation platform. However, both approaches raise issues: the delay between the design of a vehicle and its production is often long, and virtual modeling does not render the real robot and vehicle realistically, which can introduce localization inaccuracies when performing trajectories. Our work is part of the TRI project (Teleteaching Industrial Robots), which aims to build a demonstrator in which industrial robots interact with virtual components, allowing users to be trained to perform their tasks successfully on a virtual representation of a production entity.

In this project we use Augmented Reality (AR) techniques to overlay virtual objects onto the real world, enhancing the user's perception and interaction while performing a specific industrial task. The idea is to teach the real robot the trajectories of an automotive task using a virtual model of the vehicle. Pose accuracy is a prerequisite of our application, since it enables reliable teaching of the real trajectory. We therefore survey several vision-based pose computation algorithms and present a method that offers increased robustness and accuracy in the context of real-time AR tracking. Our aim is to assess the performance of these pose estimation methods in terms of errors and distance evaluation. The pose estimation methods were evaluated through a series of tests following an experimental protocol. The analysis of the results shows the performance of the algorithms in terms of accuracy, stability, and convergence.
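Pose accuracy in such an evaluation is commonly quantified by the reprojection error: the 3D reference points are projected through the estimated pose and compared with their observed image locations. The sketch below illustrates this standard metric under a pinhole camera model with intrinsics K, rotation R, and translation t; it is an illustrative assumption, not the paper's own implementation, and all names and values here are hypothetical.

```python
import math

def project(K, R, t, X):
    """Project a 3D point X through pose (R, t) with intrinsics K (pinhole model)."""
    # Transform into the camera frame: Xc = R * X + t
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective division and application of focal lengths / principal point
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return (u, v)

def reprojection_rmse(K, R, t, points3d, points2d):
    """Root-mean-square reprojection error of an estimated pose over point pairs."""
    total = 0.0
    for X, x in zip(points3d, points2d):
        u, v = project(K, R, t, X)
        total += (u - x[0]) ** 2 + (v - x[1]) ** 2
    return math.sqrt(total / len(points3d))
```

With a ground-truth pose the error is zero by construction; comparing the RMSE obtained by several pose estimation algorithms on the same point correspondences gives a direct accuracy ranking of the kind reported in this evaluation.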


Keywords: Augmented Reality · pose estimation · industrial robot · computer vision · real-time tracking





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Madjid Maidi (1)
  • Malik Mallem (1)
  • Laredj Benchikh (1)
  • Samir Otmane (1)

  1. IBISC Laboratory, Évry Cedex, France
