
Discrete Optimization Using Analog Neural Networks with Discontinuous Dynamics

  • M. Vidyasagar
Conference paper
Part of the International Series of Numerical Mathematics (ISNM, volume 121)

Abstract

In this paper, a new type of neural network is presented that can be used to perform discrete optimization over a set of the form {0, 1}^n. Unlike earlier neural networks, this network has discontinuous dynamics. It is shown that the discontinuous nature of the dynamics makes it possible to carry out a quite thorough analysis of the network trajectories. In particular, in the practically important case where the objective function to be maximized is a multilinear polynomial, almost all trajectories converge to a local maximum of the objective function. Moreover, the trajectories can be made to converge to a local maximum within a finite amount of time, and in fact arbitrarily quickly. The results presented here open the way for the formulation of a suitable complexity theory for analog computation.
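To make the idea concrete, the following is a minimal numerical sketch of one plausible instance of such discontinuous dynamics. The specific flow dx/dt = K sgn(∇f(x)), with states clipped to [0, 1]^n, is an assumption chosen for illustration and is not necessarily the exact network equation of the paper. Because f is multilinear, each partial derivative ∂f/∂x_i is independent of x_i itself, so a sign-driven flow pushes every coordinate to 0 or 1 in finite time, and increasing the gain K makes convergence arbitrarily fast, mirroring the claims in the abstract.

```python
import numpy as np

def grad_multilinear(f, x):
    # For a multilinear f, f is affine in each coordinate, so the exact
    # partial derivative is df/dx_i = f(x with x_i = 1) - f(x with x_i = 0).
    g = np.empty_like(x)
    for i in range(len(x)):
        hi, lo = x.copy(), x.copy()
        hi[i], lo[i] = 1.0, 0.0
        g[i] = f(hi) - f(lo)
    return g

def run_flow(f, x0, gain=5.0, dt=0.01, steps=500):
    # Forward-Euler integration of the (assumed) discontinuous dynamics
    #   dx/dt = gain * sign(grad f(x)),   states clipped to [0, 1]^n.
    # A larger gain drives the states to a vertex of the hypercube faster.
    x = np.clip(np.asarray(x0, dtype=float), 0.0, 1.0)
    for _ in range(steps):
        x = np.clip(x + dt * gain * np.sign(grad_multilinear(f, x)), 0.0, 1.0)
    return x

# Example: a small multilinear objective on {0, 1}^3.
f = lambda x: 2 * x[0] + x[1] - 3 * x[0] * x[1] + x[1] * x[2]
x_star = run_flow(f, [0.4, 0.6, 0.5])
print(np.round(x_star))  # -> [0. 1. 1.], a vertex that is a local maximum of f
```

In this toy run the trajectory settles at the vertex (0, 1, 1), which no single bit flip can improve, consistent with convergence to a local maximum rather than necessarily a global one.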



Copyright information

© Birkhäuser Verlag Basel 1996

Authors and Affiliations

  • M. Vidyasagar, Centre for Artificial Intelligence and Robotics, Bangalore, India
