
Recurrent neural network model based on projective operator and its application to optimization problems

Published in Applied Mathematics and Mechanics

Abstract

A recurrent neural network (RNN) model based on a projective operator is studied. Unlike earlier work, the value region of the projective operator in this network is a general closed convex subset of n-dimensional Euclidean space, which need not be a compact convex set; that is, the value region of the projective operator may be unbounded. It is proved that the network has a global solution and that its solution trajectory converges to an equilibrium set whenever the objective function satisfies certain conditions. The model is then applied to continuously differentiable optimization problems and to nonlinear and implicit complementarity problems. Simulation experiments confirm the efficiency of the RNN.
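The abstract describes a projection-based network whose trajectory converges to an equilibrium set of a constrained optimization problem when the feasible set is a general, possibly unbounded, closed convex set. As a minimal sketch of how such dynamics are typically written, not the paper's exact model, the code below assumes the common form dx/dt = P_Ω(x − α∇f(x)) − x with a convex quadratic objective and the unbounded set Ω = {x : x ≥ 0}; the function names and parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the article's exact model): a projection-type RNN
#   dx/dt = P_Omega(x - alpha * grad_f(x)) - x
# integrated with forward Euler. Here Omega is the (possibly unbounded)
# set {x : x >= 0}, so the projection is a componentwise clip.
# The objective f(x) = 0.5 * x^T Q x + c^T x and all parameter values
# below are illustrative assumptions, not taken from the article.

def project_nonneg(x):
    """Projection onto the closed convex (unbounded) set {x : x >= 0}."""
    return np.maximum(x, 0.0)

def simulate(Q, c, x0, alpha=0.1, dt=0.01, steps=5000):
    x = x0.astype(float)
    for _ in range(steps):
        grad = Q @ x + c                              # gradient of the quadratic objective
        dx = project_nonneg(x - alpha * grad) - x     # projection-network dynamics
        x = x + dt * dx                               # forward Euler step
    return x

if __name__ == "__main__":
    Q = np.array([[4.0, 1.0], [1.0, 3.0]])            # positive definite, so f is convex
    c = np.array([-1.0, -2.0])
    x_star = simulate(Q, c, x0=np.array([5.0, 5.0]))
    print("approximate equilibrium:", x_star)
```

Equilibria of such dynamics satisfy x = P_Ω(x − α∇f(x)), i.e. they solve the associated variational inequality; for Ω = {x : x ≥ 0} this reduces to a nonlinear complementarity problem, which matches the applications mentioned in the abstract.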



Author information

Corresponding author

Correspondence to Ma Ru-ning (马儒宁).

Additional information

Communicated by LIU Zeng-rong


About this article

Cite this article

Ma, Rn., Chen, Tp. Recurrent neural network model based on projective operator and its application to optimization problems. Appl Math Mech 27, 543–554 (2006). https://doi.org/10.1007/s10483-006-0415-z

