Unsupervised and Supervised Learning in Radial-Basis-Function Networks

  • Friedhelm Schwenker
  • Hans A. Kestler
  • Günther Palm
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 78)


Learning in radial basis function (RBF) networks is the topic of this chapter. Whereas multilayer perceptrons (MLP) are typically trained with backpropagation algorithms, starting the training procedure from a random initialization of the MLP's parameters, an RBF network may be trained in several different ways. We distinguish one-, two-, and three-phase learning.
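To fix notation for the training schemes discussed below, here is a minimal sketch of the RBF forward pass (Gaussian basis functions with centers, kernel widths, and a linear output layer). The NumPy implementation and all names in it are illustrative assumptions, not code from the chapter.

```python
import numpy as np

def rbf_forward(X, centers, widths, W):
    """Forward pass of an RBF network with Gaussian basis functions.

    X       -- (n_samples, n_features) inputs
    centers -- (n_hidden, n_features)  RBF centers c_j
    widths  -- (n_hidden,)             kernel widths sigma_j
    W       -- (n_hidden, n_outputs)   output-layer weights
    """
    # squared Euclidean distances ||x - c_j||^2
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    # hidden-layer activations h_j(x) = exp(-||x - c_j||^2 / (2 sigma_j^2))
    H = np.exp(-d2 / (2.0 * widths ** 2))
    # linear output layer
    return H @ W
```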

A very common learning scheme for RBF networks is two-phase learning. Here, the two layers of an RBF network are trained separately: first the RBF layer is determined, including the RBF centers and scaling parameters, and then the weights of the output layer are adapted. The RBF centers may be trained through unsupervised or supervised learning procedures utilizing clustering, vector quantization, or classification tree algorithms; the output layer of the network is adapted by supervised learning. Numerical experiments with RBF classifiers trained by two-phase learning are presented for the classification of 3D visual objects and the recognition of handwritten digits. It can be observed that the performance of RBF classifiers trained with two-phase learning can be improved through a third, backpropagation-like learning phase that adapts the whole set of parameters (RBF centers, scaling parameters, and output layer weights) simultaneously. We call this three-phase learning in RBF networks. A practical advantage of two- and three-phase learning in RBF networks is the possibility of using unlabeled training data for the first training phase.
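As an illustration of the two-phase scheme, the following sketch places the centers with a plain k-means clustering pass (one of the unsupervised options mentioned above; vector quantization or classification trees could be substituted), sets the kernel widths with a nearest-center heuristic, and then fits the output weights by regularized least squares. The function names, the width heuristic, and the ridge parameter are assumptions made for this example, not the chapter's notation.

```python
import numpy as np

def gaussian_activations(X, centers, widths):
    """Hidden-layer activations h_j(x) = exp(-||x - c_j||^2 / (2 sigma_j^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths ** 2))

def train_two_phase(X, Y, n_hidden, ridge=1e-3, n_iter=50, seed=0):
    """Two-phase RBF training: unsupervised centers first, supervised output weights second.

    X -- (n_samples, n_features) inputs, Y -- (n_samples, n_outputs) targets (e.g. one-hot labels).
    """
    rng = np.random.default_rng(seed)

    # Phase 1a: place the centers with plain k-means (labels are not used here,
    # so unlabeled data could be exploited in this phase).
    centers = X[rng.choice(len(X), n_hidden, replace=False)].astype(float)
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        for j in range(n_hidden):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)

    # Phase 1b: heuristic kernel widths -- distance to the nearest other center.
    d_cc = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
    np.fill_diagonal(d_cc, np.inf)
    widths = d_cc.min(axis=1)

    # Phase 2: supervised fit of the output weights by regularized least squares.
    H = gaussian_activations(X, centers, widths)
    W = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return centers, widths, W
```

A third, backpropagation-like phase would then start from these values and adjust centers, widths, and output weights jointly by gradient descent on the training error, which is the three-phase scheme described above.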

Support vector (SV) learning in RBF networks is a special type of one-phase learning, in which only the output-layer weights of the RBF network are calculated, while the RBF centers are restricted to be a subset of the training data.
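This correspondence can be made concrete with a standard SVM using a Gaussian kernel: after training, the support vectors act as the RBF centers and the dual coefficients become the output-layer weights. The sketch below uses scikit-learn's SVC purely for illustration; the library, the toy data, and the parameter values are assumptions, not part of the chapter.

```python
import numpy as np
from sklearn.svm import SVC

# Toy binary problem; labels are +/-1 on an XOR-like pattern.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)

gamma = 0.5                                    # gamma = 1 / (2 * sigma^2)
svm = SVC(kernel='rbf', C=10.0, gamma=gamma).fit(X, y)

centers = svm.support_vectors_                 # RBF centers = support vectors (training points)
weights = svm.dual_coef_.ravel()               # output weights = alpha_i * y_i
bias = svm.intercept_[0]

def rbf_net_decision(Xq):
    """The SVM decision function written as an RBF network with the centers/weights above."""
    d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2) @ weights + bias

# The two formulations agree exactly.
assert np.allclose(rbf_net_decision(X), svm.decision_function(X))
```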

Numerical experiments with several classifier schemes, including nearest neighbor classifiers, learning vector quantization networks, and RBF classifiers trained through two-phase, three-phase, and support vector learning, are given. The performance of the RBF classifiers trained through SV learning and three-phase learning is superior to the results of two-phase learning.



Copyright information

© Springer-Verlag Berlin Heidelberg 2002
