
Using a Mahalanobis-Like Distance to Train Radial Basis Neural Networks

  • Conference paper
Computational Intelligence and Bioinspired Systems (IWANN 2005)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3512)


Abstract

Radial Basis Neural Networks (RBNN) can approximate any regular function and have a faster training phase than comparable neural networks. However, the activation of each neuron depends on the Euclidean distance between a pattern and the neuron center. The activation function is therefore symmetrical, and all attributes are treated as equally relevant. This can be addressed by changing the metric used in the activation function (i.e., using non-symmetrical metrics). The Mahalanobis distance is one such metric: it takes into account the variability of the attributes and their correlations. However, this distance is computed directly from the variance-covariance matrix and does not take the accuracy of the learning algorithm into account. In this paper, we propose to use a generalized Euclidean metric with the structure of the Mahalanobis distance, but evolved by a Genetic Algorithm (GA). The GA searches for the distance matrix that minimizes the error produced by a fixed RBNN. Our approach has been tested on two domains, with positive results in both cases.
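The idea described in the abstract can be illustrated with a short sketch. This is not the paper's implementation: all names, parameters, and the toy problem below are illustrative assumptions. The Gaussian RBF activation is computed with a generalized distance d²(x, c) = (x − c)ᵀ M (x − c) in place of the Euclidean distance; the metric matrix is parameterized as M = L Lᵀ (so it stays positive semi-definite), and a simple mutation-and-selection evolutionary loop searches for the L that minimizes the network's mean squared error, with output weights refitted by least squares for each candidate metric.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(L_flat, X, y, centers, d, width=1.0):
    """MSE of an RBNN whose hidden layer uses the generalized distance
    d^2 = (x - c)^T M (x - c), with M = L L^T rebuilt from the flattened
    lower-triangular L (positive semi-definite by construction)."""
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = L_flat
    M = L @ L.T
    # Squared generalized distances between every pattern and every center
    diffs = X[:, None, :] - centers[None, :, :]          # shape (n, k, d)
    d2 = np.einsum('nkd,de,nke->nk', diffs, M, diffs)    # shape (n, k)
    phi = np.exp(-d2 / (2.0 * width ** 2))               # Gaussian activations
    # For a fixed metric, the output weights have a closed-form solution
    w, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return np.mean((phi @ w - y) ** 2)

# Toy regression problem: the target depends only on the first attribute,
# so a good metric should learn to down-weight the second one.
n, d, k = 200, 2, 5
X = rng.uniform(-1, 1, size=(n, d))
y = np.sin(3 * X[:, 0])
centers = X[rng.choice(n, size=k, replace=False)]        # fixed RBNN centers

# Simple evolutionary search over the d*(d+1)/2 free entries of L
pop_size, n_gen, n_params = 30, 40, d * (d + 1) // 2
pop = rng.normal(0, 1, size=(pop_size, n_params))
for gen in range(n_gen):
    errs = np.array([fitness(ind, X, y, centers, d) for ind in pop])
    elite = pop[np.argsort(errs)[: pop_size // 2]]           # keep the best half
    children = elite + rng.normal(0, 0.2, size=elite.shape)  # Gaussian mutation
    pop = np.vstack([elite, children])

best = pop[np.argmin([fitness(ind, X, y, centers, d) for ind in pop])]
print("best MSE:", fitness(best, X, y, centers, d))
```

The M = L Lᵀ factorization is one convenient way to keep the evolved matrix a valid (semi-)metric; the paper itself evolves the distance matrix with a GA, and a full GA would typically also include crossover rather than mutation alone.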




Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Valls, J.M., Aler, R., Fernández, O. (2005). Using a Mahalanobis-Like Distance to Train Radial Basis Neural Networks. In: Cabestany, J., Prieto, A., Sandoval, F. (eds) Computational Intelligence and Bioinspired Systems. IWANN 2005. Lecture Notes in Computer Science, vol 3512. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11494669_32


  • DOI: https://doi.org/10.1007/11494669_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26208-4

  • Online ISBN: 978-3-540-32106-4

  • eBook Packages: Computer Science (R0)
