Li-Function Activated Zhang Neural Network for Online Solution of Time-Varying Linear Matrix Inequality

Abstract

In previous work, a typical recurrent neural network termed the Zhang neural network (ZNN) was developed for solving various time-varying problems. Building on that work, this paper presents and investigates a ZNN model that exploits a special activation function (i.e., the Li activation function) for the online solution of the time-varying linear matrix inequality (TVLMI). For such a Li-function activated ZNN (LFAZNN) model, theoretical results are provided to show its excellent computational performance in solving the TVLMI; specifically, the presented LFAZNN model possesses the property of finite-time convergence. Comparative simulation results with two illustrative examples further substantiate the efficacy of the presented LFAZNN model for TVLMI solving.


References

  1. Kim E, Kang HJ, Park M (1999) Numerical stability analysis of fuzzy control systems via quadratic programming and linear matrix inequalities. IEEE Trans Syst, Man, Cybern, Part A 29(4):333–346

  2. Chesi G, Garulli A, Tesi A, Vicino A (2003) Solving quadratic distance problems: An LMI-based approach. IEEE Trans Autom Control 48(2):200–212

  3. Chesi G (2010) LMI techniques for optimization over polynomials in control: A survey. IEEE Trans Autom Control 55(11):2500–2510

  4. Jing X (2012) Robust adaptive learning of feedforward neural networks via LMI optimizations. Neural Netw 31:33–45

  5. Boyd S, Ghaoui LE, Feron E, Balakrishnan V (1994) Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia

  6. Liao X, Chen G, Sanchez EN (2002) Delay-dependent exponential stability analysis of delayed neural networks: An LMI approach. Neural Netw 15(7):855–866

  7. Lin C-L, Lai C-C, Huang T-H (2000) A neural network for linear matrix inequality problems. IEEE Trans Neural Netw 11(5):1078–1092

  8. Lin C-L, Huang T-H (2000) A novel approach solving for linear matrix inequalities using neural networks. Neural Process Lett 11(2):153–169

  9. Cheng L, Hou ZG, Tan M (2009) A simplified neural network for linear matrix inequality problems. Neural Process Lett 29(3):213–230

  10. Su T-J, Huang M-Y, Hou C-L, Lin Y-J (2010) Cellular neural networks for gray image noise cancellation based on a hybrid linear matrix inequality and particle swarm optimization approach. Neural Process Lett 32(2):147–165

  11. Tan M (2016) Stabilization of coupled time-delay neural networks with nodes of different dimensions. Neural Process Lett 43(1):255–268

  12. Wu H, Zhang X, Xue S, Wang L, Wang Y (2016) LMI conditions to global Mittag-Leffler stability of fractional-order neural networks with impulses. Neurocomputing 193:148–154

  13. Guo D, Zhang Y (2014) Zhang neural network for online solution of time-varying linear matrix inequality aided with an equality conversion. IEEE Trans Neural Netw Learn Syst 25(2):370–382

  14. Hu X (2010) Dynamic system methods for solving mixed linear matrix inequalities and linear vector inequalities and equalities. Appl Math Comput 216(4):1181–1193

  15. Zhang Y, Guo D (2015) Zhang Functions and Various Models. Springer-Verlag, Heidelberg

  16. Zhang Y, Yi C (2011) Zhang Neural Networks and Neural-Dynamic Method. Nova Science Publishers, New York

  17. Guo D, Zhang Y (2012) A new variant of the Zhang neural network for solving online time-varying linear inequalities. Proc R Soc A 468(2144):2255–2271

  18. Zhang Y, Jiang D, Wang J (2002) A recurrent neural network for solving Sylvester equation with time-varying coefficients. IEEE Trans Neural Netw 13(5):1053–1063

  19. Sun J, Wang S, Wang K (2016) Zhang neural networks for a set of linear matrix inequalities with time-varying coefficient matrix. Inf Process Lett 116(10):603–610

  20. Li S, Chen S, Liu B (2013) Accelerating a recurrent neural network to finite-time convergence for solving time-varying Sylvester equation by using a sign-bi-power activation function. Neural Process Lett 37(2):189–205

  21. Zhang Y, Ding Y, Qiu B, Zhang Y, Li X (2017) Signum-function array activated ZNN with easier circuit implementation and finite-time convergence for linear systems solving. Inf Process Lett 124:30–34

  22. Jin L, Li S, Wang H, Zhang Z (2018) Nonconvex projection activated zeroing neurodynamic models for time-varying matrix pseudoinversion with accelerated finite-time convergence. Appl Soft Comput 62:840–850

  23. Xiao L, Liao B, Li S, Zhang Z, Ding L, Jin L (2018) Design and analysis of FTZNN applied to the real-time solution of a nonstationary Lyapunov equation and tracking control of a wheeled mobile manipulator. IEEE Trans Ind Inform 14(1):98–105

  24. Xiao L, Liao B, Li S, Chen K (2018) Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw 98:102–113

  25. Liu S, Trenkler G (2008) Hadamard, Khatri-Rao, Kronecker and other matrix products. Int J Coop Inf Syst 4(1):160–177

  26. Mead C (1989) Analog VLSI and Neural Systems. Addison-Wesley, Boston, USA

  27. Xiao L, Li S, Yang J, Zhang Z (2018) A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 285:125–132

  28. Lv X, Xiao L, Tan Z (2019) Improved Zhang neural network with finite-time convergence for time-varying linear system of equations solving. Inf Process Lett 147:88–93


Acknowledgements

The authors would like to thank the editors and anonymous reviewers for their valuable suggestions and constructive comments, which have greatly helped improve the presentation and quality of this paper.

Author information


Corresponding author

Correspondence to Dongsheng Guo.


Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work is supported by the National Natural Science Foundation of China under Grant 61603143, the Quanzhou City Science and Technology Program of China under Grant 2018C111R, the Promotion Program for Young and Middle-aged Teacher in Science and Technology Research of Huaqiao University under Grant ZQN-YX402, and the Scientific Research Funds of Huaqiao University under Grant 15BS410.

Appendix: Procedure of ZNN Design

In this appendix, the general procedure of the ZNN design [13, 15,16,17,18,19] for solving time-varying problems is presented.

Specifically, for a time-varying problem \(F(t)=0\in R^{m\times n}\) to be solved, the following matrix/vector-valued error function \(E(t)\in R^{m\times n}\) is first defined:

$$\begin{aligned} E(t):=F(t). \end{aligned}$$

Next, the time derivative \({\dot{E}}(t)\) of E(t) is selected such that E(t) can globally and exponentially converge to zero. More specifically, \({\dot{E}}(t)\) can be selected via the ZNN design formula [13, 15,16,17,18,19] as follows:

$$\begin{aligned} {\dot{E}}(t)=-\gamma {\mathcal {F}}(E(t)), \end{aligned}$$
(11)

where \(\gamma >0\in R\) denotes the design parameter used to scale the convergence rate of the solution, and \({\mathcal {F}}(\cdot ): R^{m\times n}\rightarrow R^{m\times n}\) denotes the monotonically-increasing odd activation-function array. Finally, by expanding the ZNN design formula (11), the differential equation of a ZNN model is established for solving a specific time-varying problem.
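As a numerical illustration of this procedure, the following Python sketch applies the ZNN design formula (11) to a simple time-varying linear system \(A(t)x(t)=b(t)\), a stand-in for the TVLMI treated in the paper. The activation used here is a sign-bi-power function of the kind proposed in [20], which is one common form of the Li activation function; the exact activation expression, the parameter values (\(\gamma\), step size, exponent \(r\)), and the example system are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def li_activation(e, r=0.5):
    # Assumed sign-bi-power (Li-type) activation, applied elementwise:
    # psi(e) = |e|^r * sgn(e) + |e|^(1/r) * sgn(e), with 0 < r < 1.
    return np.sign(e) * (np.abs(e) ** r + np.abs(e) ** (1.0 / r))

def znn_solve(A, dA, b, db, gamma=10.0, T=2.0, dt=1e-3):
    """Euler-integrate the ZNN dynamics for A(t) x(t) = b(t).

    With error function E(t) = A(t) x(t) - b(t), the design formula
    dE/dt = -gamma * F(E) expands to
        A xdot = -dA x + db - gamma * F(A x - b),
    which is solved for xdot at each time step.
    """
    x = np.zeros(b(0.0).shape[0])  # zero initial state
    for k in range(int(T / dt)):
        t = k * dt
        At = A(t)
        e = At @ x - b(t)  # current error E(t)
        xdot = np.linalg.solve(At, -dA(t) @ x + db(t) - gamma * li_activation(e))
        x = x + dt * xdot  # forward-Euler step
    return x

# Illustrative time-varying 2x2 system (hypothetical example, not from the paper)
A  = lambda t: np.array([[2.0 + np.sin(t), 0.0], [0.0, 2.0 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
b  = lambda t: np.array([np.sin(t), np.cos(t)])
db = lambda t: np.array([np.cos(t), -np.sin(t)])

x_final = znn_solve(A, dA, b, db)
residual = np.linalg.norm(A(2.0) @ x_final - b(2.0))
```

The residual \(\|A(T)x(T)-b(T)\|\) shrinks rapidly because the sign-bi-power terms keep the error dynamics fast both far from and near zero, which is the mechanism behind the finite-time convergence claimed for such activations.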


Cite this article

Guo, D., Lin, X. Li-Function Activated Zhang Neural Network for Online Solution of Time-Varying Linear Matrix Inequality. Neural Process Lett (2020). https://doi.org/10.1007/s11063-020-10291-y


Keywords

  • Zhang neural network
  • Li activation function
  • Finite-time convergence
  • Theoretical results
  • Time-varying linear matrix inequality