Abstract
This paper studies a recurrent neural network (RNN) model based on a projective operator. Unlike previous studies, the range of the projective operator in this network is a general closed convex subset of n-dimensional Euclidean space rather than a compact convex set; in particular, it may be unbounded. It is proved that the network has a global solution and that its solution trajectory converges to an equilibrium set whenever the objective function satisfies certain conditions. The model is then applied to continuously differentiable optimization problems and to nonlinear and implicit complementarity problems. Simulation experiments confirm the efficiency of the RNN.
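The paper's exact dynamics are not given in this abstract, but projection-based RNNs of this family are typically of the form dx/dt = P_Ω(x − α∇f(x)) − x, where P_Ω is the projection onto the constraint set Ω. The sketch below (an illustrative assumption, not the authors' model) integrates such dynamics with forward Euler over the nonnegative orthant, a closed convex set that is unbounded, matching the setting described above; all function names and parameters are hypothetical.

```python
import numpy as np

def grad_f(x, b):
    """Gradient of the example objective f(x) = 0.5 * ||x - b||^2."""
    return x - b

def project_nonneg(y):
    """Projection onto Omega = {x : x >= 0}: closed, convex, unbounded."""
    return np.maximum(y, 0.0)

def projection_network(b, x0, alpha=0.5, dt=0.1, steps=2000):
    """Forward-Euler integration of dx/dt = P_Omega(x - alpha*grad_f(x)) - x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (project_nonneg(x - alpha * grad_f(x, b)) - x)
    return x

if __name__ == "__main__":
    b = np.array([1.0, -2.0, 3.0])
    x_star = projection_network(b, x0=np.zeros(3))
    # For this quadratic, the equilibrium is P_Omega(b) = [1, 0, 3]
    print(np.round(x_star, 4))
```

An equilibrium x = P_Ω(x − α∇f(x)) of these dynamics is exactly a solution of the variational inequality over Ω, which is why such networks also solve complementarity problems (Ω the nonnegative orthant gives x ≥ 0, F(x) ≥ 0, xᵀF(x) = 0).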
Communicated by LIU Zeng-rong
Cite this article
Ma, R.-n., Chen, T.-p. Recurrent neural network model based on projective operator and its application to optimization problems. Appl Math Mech 27, 543–554 (2006). https://doi.org/10.1007/s10483-006-0415-z
Key words
- recurrent neural network model
- projective operator
- global convergence
- optimization
- complementarity problems