3-D Modeling for Robotic Tactile Object Recognition

  • Peter K. Allen

Abstract

Solid modeling techniques have proven quite successful in the design and synthesis of objects for manufacturing. They have been less successful, however, in object recognition tasks. The creation of a CAD-based robotics cell requires the ability to perform shape recognition from a variety of sensor sources, including vision, touch, and ranging. Superquadric models have been used successfully in visual recognition tasks, and they appear to possess a number of attributes that are important for robotic tactile object recognition, where shape must be derived from sparse tactile sensor data. Superquadrics can model many complex shapes, including arbitrary taperings and bendings, within a relatively small and stable parameter space. This paper discusses the components of such a model and its relationship to active tactile sensing strategies with a multi-fingered robotic hand.
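
The superquadric parameter space mentioned above follows Barr's formulation: three axis lengths and two shape exponents define both an implicit inside-outside function and an explicit surface parameterization. The sketch below is only an illustration of that standard formulation, not code from the paper; the function names and example parameter values are assumptions chosen for clarity.

```python
import numpy as np

def inside_outside(x, y, z, a1, a2, a3, eps1, eps2):
    """Barr's inside-outside function F for a superquadric in canonical pose.

    F is about 1 on the surface, less than 1 inside, greater than 1 outside.
    a1, a2, a3 are axis lengths; eps1 and eps2 are the shape exponents
    controlling squareness in the north-south and east-west directions.
    """
    return ((np.abs(x / a1) ** (2.0 / eps2) +
             np.abs(y / a2) ** (2.0 / eps2)) ** (eps2 / eps1) +
            np.abs(z / a3) ** (2.0 / eps1))

def surface_points(a1, a2, a3, eps1, eps2, n=50):
    """Sample the explicit surface over latitude eta and longitude omega."""
    eta, omega = np.meshgrid(np.linspace(-np.pi / 2, np.pi / 2, n),
                             np.linspace(-np.pi, np.pi, n))

    def spow(v, p):
        # Signed power: keep the sign of v while raising |v| to exponent p.
        return np.sign(v) * np.abs(v) ** p

    x = a1 * spow(np.cos(eta), eps1) * spow(np.cos(omega), eps2)
    y = a2 * spow(np.cos(eta), eps1) * spow(np.sin(omega), eps2)
    z = a3 * spow(np.sin(eta), eps1)
    return x, y, z

if __name__ == "__main__":
    # eps1 = eps2 = 1 gives an ellipsoid; small exponents give a rounded box.
    x, y, z = surface_points(1.0, 1.0, 2.0, 0.3, 0.3)
    print(inside_outside(x, y, z, 1.0, 1.0, 2.0, 0.3, 0.3).mean())  # ~1.0
```

Because only five shape parameters (plus a rigid-body pose) are involved, recovering such a model amounts to adjusting them until sparse contact points lie near the F = 1 surface, which is what makes the representation attractive for sparse tactile data.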

Copyright information

© Springer-Verlag Berlin Heidelberg 1989

Authors and Affiliations

  • Peter K. Allen
  1. Department of Computer Science, Columbia University, New York, USA
