Model-referenced pose estimation using monocular vision for autonomous intervention tasks

Abstract

This study addresses vision-based underwater navigation techniques for automating underwater intervention tasks with robotic vehicles. A systematic model-referenced pose estimation procedure is introduced to obtain the relative pose between an underwater vehicle and underwater structures whose geometry and shape are known. Combining the vision-based pose estimates with inertial navigation enables underwater robots to navigate precisely around such structures for challenging intervention tasks such as subsea construction, maintenance, and inspection. To demonstrate the feasibility of the proposed approach, a set of experiments was carried out in a test tank using an autonomous underwater vehicle.
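The paper itself does not include source code, but the core idea of model-referenced pose estimation can be illustrated with a short sketch. The fragment below is a minimal, hypothetical example (not the authors' implementation) that recovers the camera-relative pose of a structure from known 3D model points and their detected 2D image projections by solving the Perspective-n-Point (PnP) problem with OpenCV; all point coordinates and camera intrinsics shown are placeholder values.

```python
# Minimal sketch of model-referenced pose estimation (illustrative only,
# not the authors' code). Assumes: known 3D feature points on the structure
# (e.g., from its CAD model), their detected 2D pixel locations, and a
# calibrated monocular camera.
import numpy as np
import cv2

# Hypothetical 3D model points (meters, structure frame) and their detected
# 2D projections (pixels). In practice the 2D points come from a feature
# detector/matcher or edge tracker.
model_points = np.array([
    [0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0],
    [0.5, 0.3, 0.0],
    [0.0, 0.3, 0.0],
], dtype=np.float64)
image_points = np.array([
    [320.0, 240.0],
    [420.0, 238.0],
    [418.0, 180.0],
    [322.0, 182.0],
], dtype=np.float64)

# Assumed camera intrinsics and zero lens distortion (placeholders).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Solve PnP for the structure's pose relative to the camera. A robust
# variant (cv2.solvePnPRansac) would typically be used to reject mismatches.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix, structure frame -> camera frame
    print("Relative translation (m):", tvec.ravel())
```

In the system described in the paper, such a camera-relative pose would then be fused with inertial navigation data for precise maneuvering around the structure; the sketch above covers only the vision step.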

Author information

Correspondence to Jinwhan Kim.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 117216 KB)

About this article

Cite this article

Park, J., Kim, T., & Kim, J. (2019). Model-referenced pose estimation using monocular vision for autonomous intervention tasks. Autonomous Robots. https://doi.org/10.1007/s10514-019-09886-9

Keywords

  • Underwater navigation
  • Underwater robot
  • Model-referenced pose estimation
  • Underwater intervention task