An Immunological Approach to Initialize Feedforward Neural Network Weights

  • Leandro Nunes de Castro
  • Fernando J. Von Zuben

Abstract

The initial weight vector used in supervised learning for multilayer feedforward neural networks has a strong influence on the learning speed and on the quality of the solution obtained after convergence. An inadequate initial choice may cause the training process to get stuck in a poor local minimum, or to encounter numerical problems. In this paper, we propose a biologically inspired initialization method based on artificial immune systems. This new strategy is applied to several benchmark and real-world problems, and its performance is compared to that of other approaches already suggested in the literature.
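The abstract leaves the mechanism to the body of the paper, but the keywords (simulated annealing, antibody repertoire) suggest an initialization that treats candidate weight vectors as antibodies and spreads them over weight space before training begins. The sketch below illustrates that reading only: the function names, the pairwise-distance diversity measure, and every parameter are assumptions for illustration, not the authors' published algorithm.

```python
import numpy as np

def repertoire_diversity(weights):
    # Sum of pairwise Euclidean distances between candidate weight
    # vectors; larger values mean a more spread-out "repertoire".
    diffs = weights[:, None, :] - weights[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).sum() / 2.0

def anneal_initial_weights(n_candidates, n_weights, scale=1.0,
                           n_steps=2000, temp=1.0, cooling=0.995,
                           rng=None):
    # Hypothetical sketch: spread a repertoire of candidate initial
    # weight vectors over [-scale, scale]^n_weights by maximizing
    # pairwise diversity with simulated annealing, where the energy
    # to be minimized is the negative diversity.
    rng = np.random.default_rng(0) if rng is None else rng
    weights = rng.uniform(-scale, scale, size=(n_candidates, n_weights))
    energy = -repertoire_diversity(weights)
    for _ in range(n_steps):
        # Perturb one randomly chosen candidate vector.
        candidate = weights.copy()
        i = rng.integers(n_candidates)
        candidate[i] = np.clip(
            candidate[i] + rng.normal(0.0, 0.1 * scale, n_weights),
            -scale, scale)
        new_energy = -repertoire_diversity(candidate)
        # Always accept improvements; accept worse moves with the
        # usual Boltzmann probability, then cool the temperature.
        if (new_energy < energy
                or rng.random() < np.exp((energy - new_energy) / temp)):
            weights, energy = candidate, new_energy
        temp *= cooling
    return weights
```

Under this reading, each row of the returned array would seed one training run: `n_weights` is the total number of connection weights and biases in the network, and each candidate row is unpacked into the layer weight matrices before backpropagation starts.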

Keywords

Feedforward Neural Network, Simulated Annealing Algorithm, Artificial Immune System, Antibody Repertoire, Immunological Approach

Copyright information

© Springer-Verlag Wien 2001

Authors and Affiliations

  • Leandro Nunes de Castro
  • Fernando J. Von Zuben
  1. School of Electrical and Computer Engineering, State University of Campinas, Brazil
