Image Understanding for Robotic Applications

  • Ramesh Jain
Conference paper
Part of the NATO ASI Series book series (volume 33)


Current-generation robots work in a constrained environment. In most robotic applications the environment is known, and several aspects of it may be controlled. Image understanding algorithms can use this knowledge of the environment to facilitate the recovery of information from images: several problems that are difficult for general image understanding systems become tractable with techniques that exploit the available knowledge. We demonstrate the role of such knowledge in robotic applications for recovering information in dynamic scenes. In the first application, the known ego-motion parameters of a mobile robot are used to segment a scene and recover depth information. In the second application, a hypothesize-and-test approach is used to find road edges in real scenes for an autonomous vehicle.
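To illustrate the first application, the following is a minimal sketch of how known ego-motion can yield depth. It assumes the simplest case of motion stereo: the camera translates a known distance along its optical axis, so each image point moves radially away from the focus of expansion, and its depth follows from the pinhole relation r = fR/Z. The function and variable names are illustrative, not from the paper.

```python
def depth_from_axial_motion(r0, r1, dz):
    """Depth of a scene point from its radial image displacement under
    known axial ego-motion (a sketch of the motion-stereo idea).

    r0, r1 : radial distances of the point from the focus of expansion
             in the first and second frames (image units)
    dz     : known camera translation toward the scene between frames
             (world units)
    """
    if r1 <= r0:
        raise ValueError("point must move away from the focus of expansion")
    # Pinhole model: r0 = f*R/Z and r1 = f*R/(Z - dz),
    # hence r1/r0 = Z/(Z - dz) and Z = dz * r1 / (r1 - r0).
    return dz * r1 / (r1 - r0)
```

For example, a point at depth 10 with f·R = 100 appears at r0 = 10.0; after the camera advances dz = 1 it appears at r1 = 100/9, and the formula returns the original depth of 10. Points whose radial displacement is inconsistent with the known ego-motion can then be flagged as independently moving, which is the basis for segmentation.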


Keywords: Mobile Robot · Optical Flow · Edge Point · Camera Parameter · Dynamic Scene





Copyright information

© Springer-Verlag Berlin Heidelberg 1987

Authors and Affiliations

  • Ramesh Jain
  1. Computer Vision Research Laboratory, Electrical Engineering and Computer Science, The University of Michigan, Ann Arbor, USA
