A New Min-Max Optimisation Approach for Fast Learning Convergence of Feed-Forward Neural Networks

  • A. Chella
  • A. Gentile
  • F. Sorbello
  • A. Tarantino
Conference paper


One of the most critical aspects limiting the wide application of neural networks to real-world problems is the learning process, which is known to be computationally expensive and time consuming.

In this paper we propose a new approach to the learning process, formulated from an optimisation point of view. The developed algorithm is a min-max method based on a combination of the quasi-Newton and steepest-descent methods; it has previously been applied with success in other areas and shows a faster convergence rate than the classical learning rules.
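To make the idea concrete, the Python sketch below shows the general shape of such a hybrid step: a quasi-Newton (BFGS-like) direction is tried first, and the method falls back to steepest descent whenever that direction fails to be a descent direction for the maximum error. This is an illustrative reconstruction only; the function names, the BFGS update, and the fallback test are our assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch of one hybrid quasi-Newton / steepest-descent step
# for min-max learning. NOT the authors' exact method: the names, the
# BFGS update and the fallback test are assumptions for exposition.
import numpy as np

def minmax_step(w, H_inv, grad_max_error, lr=0.1):
    """One step towards minimising max_p E_p(w).

    w              -- flat vector of all network weights
    H_inv          -- current inverse-Hessian estimate (quasi-Newton state)
    grad_max_error -- callable: w -> (sub)gradient of the maximum pattern error
    """
    g = grad_max_error(w)
    d = -H_inv @ g                 # quasi-Newton direction
    if g @ d >= 0:                 # not a descent direction for the max error:
        d = -g                     # fall back to steepest descent
    w_new = w + lr * d

    # Standard BFGS update of the inverse-Hessian estimate.
    s, y = w_new - w, grad_max_error(w_new) - g
    sy = s @ y
    if sy > 1e-12:                 # curvature condition keeps H_inv positive definite
        rho, I = 1.0 / sy, np.eye(len(w))
        V = I - rho * np.outer(s, y)
        H_inv = V @ H_inv @ V.T + rho * np.outer(s, s)
    return w_new, H_inv
```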

The optimum point is reached by minimising the maximum of the error functions of the network, without requiring any tuning of internal parameters.
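In symbols, writing E_p(w) for the error of the network with weight vector w on training pattern p (our notation, since the abstract fixes none), the learning task is posed as the min-max problem:

```latex
% Min-max formulation of learning: minimise the worst-case error
% over the P training patterns (notation ours).
\min_{\mathbf{w}} \; \max_{1 \le p \le P} \; E_p(\mathbf{w})
```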

Moreover, the proposed algorithm provides useful information about the appropriate size of the initial weight values through simple observations on the Gramian matrix associated with the network.
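The abstract does not state how the Gramian is inspected; one plausible reading, sketched below in Python, is to form the Gramian of the hidden-layer activations produced by candidate random initial weights and check its conditioning, since a near-singular Gramian signals saturated or redundant hidden units. The condition-number criterion and all names here are our own assumptions.

```python
# Illustrative only: inspect the Gramian of hidden activations to gauge
# a candidate scale for the random initial weights. The precise criterion
# used by the authors is not given in the abstract; the condition-number
# test below is our own assumption.
import numpy as np

def gramian_condition(X, weight_scale, n_hidden, rng=np.random.default_rng(0)):
    """Condition number of the Gramian of hidden-layer activations.

    X            -- (patterns x inputs) training matrix
    weight_scale -- size of the uniform random initial weights to test
    """
    W = rng.uniform(-weight_scale, weight_scale, (X.shape[1], n_hidden))
    A = np.tanh(X @ W)             # hidden-layer activations, one row per pattern
    G = A.T @ A                    # Gramian associated with the network
    return np.linalg.cond(G)

# Usage: prefer the smallest scale whose Gramian stays well conditioned.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # XOR inputs
for scale in (0.1, 0.5, 1.0, 3.0):
    print(scale, gramian_condition(X, scale, n_hidden=4))
```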

The algorithm has been tested on several widespread benchmarks. It shows superior properties to backpropagation, both in convergence rate and in ease of use, and its performance is highly competitive with the other learning methods available in the literature.

Significant simulation results are also reported in the paper.


Keywords: Error Function, Maximum Error, Character Recognition, Descent Direction, Internal Parameter





Copyright information

© Springer-Verlag/Wien 1993

Authors and Affiliations

  • A. Chella
  • A. Gentile
  • F. Sorbello
  • A. Tarantino

DIE, Department of Electrical Engineering, University of Palermo, Palermo, Italy
