A neural paradigm for time-varying motion segmentation

  • Yang Jing’an 
Regular Papers


This paper proposes a new neural algorithm that segments an observed scene into regions corresponding to different moving objects by analyzing a time-varying image sequence. The method consists of a classification step, in which the motion of small patches is characterized through an optimization approach, and a segmentation step, which merges neighboring patches characterized by the same motion. Motion is classified without computing optical flow: only the spatial and temporal image gradients enter an appropriate energy function, which is minimized with a Hopfield-like neural network whose output is directly the 3D motion parameter estimates. Network convergence is accelerated by integrating the quantitative estimation of motion parameters with a qualitative estimate of the dominant motion based on the geometric theory of differential equations.
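The two-step pipeline described above can be sketched in simplified form. The snippet below is a minimal illustration, not the paper's actual network: it assumes a 2D translational motion per patch (the paper estimates 3D parameters), replaces the Hopfield-like relaxation with plain gradient descent on a brightness-constancy energy built from the spatial gradients `Ex`, `Ey` and the temporal gradient `Et`, and merges neighboring patches whose estimated motions agree. All function names and the tolerance `tol` are hypothetical.

```python
import numpy as np

def estimate_patch_motion(Ex, Ey, Et, steps=500, lr=0.1):
    """Classification step (simplified sketch): descend the energy
    E(u, v) = mean((Ex*u + Ey*v + Et)^2), the brightness-constancy
    residual, standing in for the paper's Hopfield-like relaxation."""
    u, v = 0.0, 0.0
    for _ in range(steps):
        r = Ex * u + Ey * v + Et        # constraint residual per pixel
        u -= lr * 2.0 * np.mean(r * Ex)  # dE/du
        v -= lr * 2.0 * np.mean(r * Ey)  # dE/dv
    return u, v

def merge_patches(motions, neighbors, tol=0.1):
    """Segmentation step (sketch): union-find merge of neighboring
    patches whose motion estimates differ by less than `tol`."""
    parent = list(range(len(motions)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in neighbors:
        if np.hypot(*np.subtract(motions[i], motions[j])) < tol:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(motions))]
```

For synthetic gradients generated from a known translation (u0, v0), i.e. `Et = -(Ex*u0 + Ey*v0)`, the descent recovers (u0, v0); patches with matching estimates then receive the same region label.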


Keywords: qualitative description of motion field; time-varying image sequence; geometric theory of differential equations; Hopfield-like neural network; quantitative interpretation



Copyright information

© Science Press, Beijing, China and Allerton Press, Inc. 1999

Authors and Affiliations

  • Yang Jing’an (1, 2)
  1. Institute of Artificial Intelligence, Hefei University of Technology, Hefei, P.R. China
  2. International Center for Theoretical Physics, Trieste, Italy
