Human-machine Cooperative Manipulation with Vision-based Motion Constraints

  • Gregory D. Hager
Part of the Lecture Notes in Control and Information Sciences book series (LNCIS, volume 401)

Abstract

This chapter discusses a class of control algorithms that provide enhanced physical dexterity by imposing passive motion constraints. Such motion constraints are often referred to as virtual fixtures. It is shown that algorithms originally designed for vision-based control of manipulators can be easily converted into control algorithms that provide virtual fixtures. As a result, it is possible to create advanced human-machine cooperative manipulation systems that take full advantage of information provided by vision, yet permit the user to retain control of the essential aspects of a given task.
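The passive motion constraint described above is commonly realized in the virtual-fixtures literature as an anisotropic admittance law: the user's applied force is resolved into a component along a preferred direction (which is passed through at full gain) and an off-axis component (which is attenuated). The sketch below illustrates this idea; the function name, the attenuation factor `c`, and the admittance gain `k` are illustrative assumptions, not the chapter's exact formulation.

```python
import numpy as np

def virtual_fixture_velocity(f_user, d, c=0.1, k=1.0):
    """Map a user-applied force to a commanded velocity under a
    guidance virtual fixture (illustrative sketch).

    f_user : user force vector (e.g. from a force sensor on the handle)
    d      : preferred motion direction defining the fixture
    c      : attenuation in [0, 1] for motion off the preferred direction
             (c = 0 gives a hard constraint, c = 1 gives free motion)
    k      : overall admittance gain
    """
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)              # unit preferred direction
    span = np.outer(d, d)                  # projector onto the preferred direction
    null = np.eye(len(d)) - span           # projector onto the off-axis subspace
    # Comply fully along the fixture, attenuate off-axis motion.
    return k * (span @ f_user) + c * k * (null @ f_user)
```

With `c = 0` the controller permits motion only along `d`, i.e. a hard virtual fixture; intermediate values of `c` yield the compliant guidance behavior the chapter discusses.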

Keywords

Preferred Direction, Retinal Vein Occlusion, Central Retinal Vein Occlusion, Visual Servoing, Motion Constraint

Copyright information

© Springer London 2010

Authors and Affiliations

  • Gregory D. Hager
  1. Department of Computer Science, Johns Hopkins University, Baltimore, USA