Radial Basis Function Networks

Neural Networks and Statistical Learning

Abstract

Learning can be viewed as an approximation problem, closely related to conventional approximation techniques such as generalized splines and regularization. The RBF network has its origin in the exact interpolation of a set of data points in a multidimensional space [81]. It is a universal approximator and a popular alternative to the MLP, since it has a simpler structure and a much faster training process. Both models are widely used for classification and function approximation.
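
To make the exact-interpolation view concrete, the sketch below places one Gaussian basis function at every data point and solves the resulting linear system for the output weights. This is a minimal illustration, not code from the chapter; the Gaussian width sigma, the toy target, and the function names are illustrative choices.

import numpy as np

def gaussian_kernel(r, sigma=1.0):
    # Gaussian radial basis function evaluated at distance r
    return np.exp(-(r ** 2) / (2.0 * sigma ** 2))

def fit_exact_rbf(X, y, sigma=1.0):
    # One basis function centered at every training point; the weights
    # solve the interpolation system Phi w = y exactly.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = gaussian_kernel(dists, sigma)
    return np.linalg.solve(Phi, y)

def predict_rbf(X_train, w, X_new, sigma=1.0):
    # Evaluate the fitted RBF expansion at new inputs
    dists = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=-1)
    return gaussian_kernel(dists, sigma) @ w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(20, 2))   # 20 points in a 2-D input space
    y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2   # arbitrary target values
    w = fit_exact_rbf(X, y, sigma=0.5)
    residual = predict_rbf(X, w, X, sigma=0.5) - y
    print(np.max(np.abs(residual)))            # close to zero: exact at the data points

For distinct data points the Gaussian interpolation matrix is nonsingular [68], so the solve reproduces the targets at the training points; practical RBF networks typically relax this setup by using far fewer centers than data points.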

References

  1. Adeney, K. M., & Korenberg, M. J. (2000). Iterative fast orthogonal search algorithm for MDL-based training of generalized single-layer networks. Neural Networks, 13, 787–799.

  2. Albus, J. S. (1975). A new approach to manipulator control: The cerebellar model articulation controller (CMAC). Transactions of the ASME: Journal of Dynamic Systems, Measurement, and Control, 97, 220–227.

  3. Barron, A. R. (1993). Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3), 930–945.

  4. Benaim, M. (1994). On functional approximation with normalized Gaussian units. Neural Computation, 6(2), 319–333.

  5. Berthold, M. R., & Diamond, J. (1995). Boosting the performance of RBF networks with dynamic decay adjustment. In G. Tesauro, D. S. Touretzky, & T. Leen (Eds.), Advances in neural information processing systems (Vol. 7, pp. 521–528). Cambridge, MA: MIT Press.

  6. Bobrow, J. E., & Murray, W. (1993). An algorithm for RLS identification of parameters that vary quickly with time. IEEE Transactions on Automatic Control, 38(2), 351–354.

  7. Borghese, N. A., & Ferrari, S. (1998). Hierarchical RBF networks and local parameter estimate. Neurocomputing, 19, 259–283.

  8. Bors, G. A., & Pitas, I. (1996). Median radial basis function neural network. IEEE Transactions on Neural Networks, 7(6), 1351–1364.

  9. Broomhead, D. S., & Lowe, D. (1988). Multivariable functional interpolation and adaptive networks. Complex Systems, 2, 321–355.

  10. Brown, M., Harris, C. J., & Parks, P. (1993). The interpolation capabilities of the binary CMAC. Neural Networks, 6(3), 429–440.

  11. Bugmann, G. (1998). Normalized Gaussian radial basis function networks. Neurocomputing, 20, 97–110.

  12. Cha, I., & Kassam, S. A. (1995). Channel equalization using adaptive complex radial basis function networks. IEEE Journal on Selected Areas in Communications, 13(1), 122–131.

  13. Chen, C. L., Chen, W. C., & Chang, F. Y. (1993). Hybrid learning algorithm for Gaussian potential function networks. IEE Proceedings - D, 140(6), 442–448.

  14. Chen, S., Billings, S. A., Cowan, C. F. N., & Grant, P. M. (1990). Practical identification of NARMAX models using radial basis functions. International Journal of Control, 52(6), 1327–1350.

  15. Chen, S., Cowan, C., & Grant, P. (1991). Orthogonal least squares learning algorithm for radial basis function networks. IEEE Transactions on Neural Networks, 2(2), 302–309.

  16. Chen, S., Grant, P. M., & Cowan, C. F. N. (1992). Orthogonal least squares learning algorithm for training multioutput radial basis function networks. IEE Proceedings-F, 139(6), 378–384.

  17. Chen, S., Grant, P. M., McLaughlin, S., & Mulgrew, B. (1993). Complex-valued radial basis function networks. In Proceedings of the 3rd IEE International Conference on Artificial Neural Networks (pp. 148–152). Brighton, UK.

  18. Chen, S., McLaughlin, S., & Mulgrew, B. (1994). Complex-valued radial basis function network, part I: Network architecture and learning algorithms. Signal Processing, 35, 19–31.

  19. Chen, S., Hong, X., Harris, C. J., & Hanzo, L. (2008). Fully complex-valued radial basis function networks: Orthogonal least squares regression and classification. Neurocomputing, 71, 3421–3433.

  20. Chuang, C. C., Su, S. F., & Hsiao, C. C. (2000). The annealing robust backpropagation (ARBP) learning algorithm. IEEE Transactions on Neural Networks, 11(5), 1067–1077.

  21. Chuang, C. C., Jeng, J. T., & Lin, P. T. (2004). Annealing robust radial basis function networks for function approximation with outliers. Neurocomputing, 56, 123–139.

  22. Ciocoiu, I. B. (2002). RBF networks training using a dual extended Kalman filter. Neurocomputing, 48, 609–622.

  23. Constantinopoulos, C., & Likas, A. (2006). An incremental training method for the probabilistic RBF network. IEEE Transactions on Neural Networks, 17(4), 966–974.

  24. Dorffner, G. (1994). Unified framework for MLPs and RBFNs: Introducing conic section function networks. Cybernetics and Systems, 25, 511–554.

  25. Du, K.-L., Huang, X., Wang, M., Zhang, B., & Hu, J. (2000). Robot impedance learning of the peg-in-hole dynamic assembly process. International Journal of Robotics and Automation, 15(3), 107–118.

  26. Esposito, A., Marinaro, M., Oricchio, D., & Scarpetta, S. (2000). Approximation of continuous and discontinuous mappings by a growing neural RBF-based algorithm. Neural Networks, 13, 651–665.

  27. Ferrari, S., Maggioni, M., & Borghese, N. A. (2004). Multiscale approximation with hierarchical radial basis functions networks. IEEE Transactions on Neural Networks, 15(1), 178–188.

  28. Girosi, F., & Poggio, T. (1990). Networks and the best approximation property. Biological Cybernetics, 63, 169–176.

  29. Gomm, J. B., & Yu, D. L. (2000). Selecting radial basis function network centers with recursive orthogonal least squares training. IEEE Transactions on Neural Networks, 11(2), 306–314.

  30. Gorinevsky, D. (1997). An approach to parametric nonlinear least square optimization and application to task-level learning control. IEEE Transactions on Automatic Control, 42(7), 912–927.

  31. Heiss, M., & Kampl, S. (1996). Multiplication-free radial basis function network. IEEE Transactions on Neural Networks, 7(6), 1461–1464.

  32. Holden, S. B., & Rayner, P. J. W. (1995). Generalization and PAC learning: some new results for the class of generalized single-layer networks. IEEE Transactions on Neural Networks, 6(2), 368–380.

  33. Hong, X., & Billings, S. A. (1997). Givens rotation based fast backward elimination algorithm for RBF neural network pruning. IEE Proceedings - Control Theory and Applications, 144(5), 381–384.

  34. Hou, M., & Han, X. (2010). Constructive approximation to multivariate function by decay RBF neural network. IEEE Transactions on Neural Networks, 21(9), 1517–1523.

  35. Huang, G. B., Saratchandran, P., & Sundararajan, N. (2004). An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) Networks. IEEE Transactions on Systems Man and Cybernetics B, 34(6), 2284–2292.

  36. Huang, G. B., Saratchandran, P., & Sundararajan, N. (2005). A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. IEEE Transactions on Neural Networks, 16(1), 57–67.

  37. Huang, G.-B., Chen, L., & Siew, C.-K. (2006). Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Transactions on Neural Networks, 17(4), 879–892.

  38. Huang, G.-B., Zhu, Q.-Y., & Siew, C.-K. (2006). Extreme learning machine: Theory and applications. Neurocomputing, 70, 489–501.

  39. Huang, G.-B., Zhu, Q.-Y., Mao, K. Z., Siew, C.-K., Saratchandran, P., & Sundararajan, N. (2006). Can threshold networks be trained directly? IEEE Transactions on Circuits and Systems II, 53(3), 187–191.

  40. Jankowski, N., & Kadirkamanathan, V. (1997). Statistical control of growing and pruning in RBF-like neural networks. In Proceedings of the 3rd Conference on Neural Networks and Their Applications (pp. 663–670), Kule, Poland.

  41. Jianping, D., Sundararajan, N., & Saratchandran, P. (2002). Communication channel equalization using complex-valued minimal radial basis function neural networks. IEEE Transactions on Neural Networks, 13(3), 687–696.

  42. Kadirkamanathan, V., & Niranjan, M. (1993). A function estimation approach to sequential learning with neural network. Neural Computation, 5(6), 954–975.

  43. Kadirkamanathan, V. (1994). A statistical inference based growth criterion for the RBF network. In Proceedings of the IEEE Workshop on Neural Networks for Signal Processing (pp. 12–21), Ermioni, Greece.

  44. Kaminski, W., & Strumillo, P. (1997). Kernel orthonormalization in radial basis function neural networks. IEEE Transactions on Neural Networks, 8(5), 1177–1183.

  45. Karayiannis, N. B., & Mi, G. W. (1997). Growing radial basis neural networks: Merging supervised and unsupervised learning with network growth techniques. IEEE Transactions on Neural Networks, 8(6), 1492–1506.

  46. Karayiannis, N. B. (1999). Reformulated radial basis neural networks trained by gradient descent. IEEE Transactions on Neural Networks, 10(3), 657–671.

  47. Karayiannis, N. B., & Xiong, Y. (2006). Training reformulated radial basis function neural networks capable of identifying uncertainty in data classification. IEEE Transactions on Neural Networks, 17(5), 1222–1234.

  48. Kraaijveld, M. A., & Duin, R. P. W. (1991). Generalization capabilities of minimal kernel-based networks. In Proceedings of the International Joint Conference on Neural Networks (Vol. 1, pp. 843–848). Seattle, WA.

  49. Krzyzak, A., & Linder, T. (1998). Radial basis function networks and complexity regularization in function learning. IEEE Transactions on Neural Networks, 9(2), 247–256.

  50. Lan, Y., Soh, Y. C., & Huang, G.-B. (2010). Two-stage extreme learning machine for regression. Neurocomputing, 73, 3028–3038.

  51. Lan, Y., Soh, Y. C., & Huang, G.-B. (2010). Constructive hidden nodes selection of extreme learning machine for regression. Neurocomputing, 73, 3191–3199.

  52. Langari, R., Wang, L., & Yen, J. (1997). Radial basis function networks, regression weights, and the expectation-maximization algorithm. IEEE Transactions on Systems, Man, and Cybernetics A, 27(5), 613–623.

  53. Lee, C. C., Chung, P. C., Tsai, J. R., & Chang, C. I. (1999). Robust radial basis function neural networks. IEEE Transactions on Systems, Man, and Cybernetics B, 29(6), 674–685.

  54. Lee, S. J., & Hou, C. L. (2002). An ART-based construction of RBF networks. IEEE Transactions on Neural Networks, 13(6), 1308–1321.

  55. Lee, K. Y., & Jung, S. (1999). Extended complex RBF and its application to M-QAM in presence of co-channel interference. Electronics Letters, 35(1), 17–19.

  56. Lehtokangas, M., Saarinen, J., & Kaski, K. (1995). Accelerating training of radial basis function networks with cascade-correlation algorithm. Neurocomputing, 9, 207–213.

  57. Lehtokangas, M., & Saarinen, J. (1998). Centroid based multilayer perceptron networks. Neural Processing Letters, 7, 101–106.

  58. Leonardis, A., & Bischof, H. (1998). An efficient MDL-based construction of RBF networks. Neural Networks, 11, 963–973.

  59. Leung, C.-S., & Sum, J. P.-F. (2008). A fault-tolerant regularizer for RBF networks. IEEE Transactions on Neural Networks, 19(3), 493–507.

  60. Li, M.-B., Huang, G.-B., Saratchandran, P., & Sundararajan, N. (2005). Fully complex extreme learning machine. Neurocomputing, 68, 306–314.

  61. Liang, N.-Y., Huang, G.-B., Saratchandran, P., & Sundararajan, N. (2006). A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Transactions on Neural Networks, 17(6), 1411–1423.

  62. Liao, Y., Fang, S. C., & Nuttle, H. L. W. (2003). Relaxed conditions for radial-basis function networks to be universal approximators. Neural Networks, 16, 1019–1028.

  63. Lowe, D. (1995). On the use of nonlocal and non-positive definite basis functions in radial basis function networks. In Proceedings of the IEE International Conference on Artificial Neural Networks (pp. 206–211), Cambridge, UK.

  64. Mahdi, R. N., & Rouchka, E. C. (2011). Reduced hyperBF networks: Regularization by explicit complexity reduction and scaled Rprop-based training. IEEE Transactions on Neural Networks, 22(5), 673–686.

  65. Mao, K. Z. (2002). RBF neural network center selection based on Fisher ratio class separability measure. IEEE Transactions on Neural Networks, 13(5), 1211–1217.

  66. McLoone, S., Brown, M. D., Irwin, G., & Lightbody, G. (1998). A hybrid linear/nonlinear training algorithm for feedforward neural networks. IEEE Transactions on Neural Networks, 9(4), 669–684.

  67. McLoone, S., & Irwin, G. (2001). Improving neural network training solutions using regularisation. Neurocomputing, 37, 71–90.

  68. Micchelli, C. A. (1986). Interpolation of scattered data: Distance matrices and conditionally positive definite functions. Constructive Approximation, 2, 11–22.

  69. Miche, Y., Sorjamaa, A., Bas, P., Simula, O., Jutten, C., & Lendasse, A. (2010). OP-ELM: Optimally pruned extreme learning machine. IEEE Transactions on Neural Networks, 21(1), 158–162.

  70. Miller, W. T., Glanz, F. H., & Kraft, L. G. (1990). CMAC: An associative neural network alternative to backpropagation. Proceedings of the IEEE, 78(10), 1561–1567.

  71. Moody, J., & Darken, C. J. (1989). Fast learning in networks of locally-tuned processing units. Neural Computation, 1(2), 281–294.

  72. Musavi, M. T., Ahmed, W., Chan, K. H., Faris, K. B., & Hummels, D. M. (1992). On the training of radial basis function classifiers. Neural Networks, 5(4), 595–603.

  73. Niyogi, P., & Girosi, F. (1999). Generalization bounds for function approximation from scattered noisy data. Advances in Computational Mathematics, 10, 51–80.

  74. Oliveira, A. L. I., Melo, B. J. M., & Meira, S. R. L. (2005). Improving constructive training of RBF networks through selective pruning and model selection. Neurocomputing, 64, 537–541.

  75. Orr, M. J. L. (1995). Regularization in the selection of radial basis function centers. Neural Computation, 7(3), 606–623.

  76. Paetz, J. (2004). Reducing the number of neurons in radial basis function networks with dynamic decay adjustment. Neurocomputing, 62, 79–91.

  77. Park, J., & Sandberg, I. W. (1991). Universal approximation using radial-basis-function networks. Neural Computation, 3, 246–257.

  78. Peng, H., Ozaki, T., Haggan-Ozaki, V., & Toyoda, Y. (2003). A parameter optimization method for radial basis function type models. IEEE Transactions on Neural Networks, 14(2), 432–438.

  79. Platt, J. (1991). A resource allocating network for function interpolation. Neural Computation, 3(2), 213–225.

  80. Poggio, T., & Girosi, F. (1990). Networks for approximation and learning. Proceedings of the IEEE, 78(9), 1481–1497.

  81. Powell, M. J. D. (1987). Radial basis functions for multivariable interpolation: A review. In J. C. Mason & M. G. Cox (Eds.), Algorithms for approximation (pp. 143–167). Oxford: Clarendon Press.

  82. Rojas, I., Pomares, H., Bernier, J. L., Ortega, J., Pino, B., & Pelayo, F. J., et al. (2002). Time series analysis using normalized PG-RBF network with regression weights. Neurocomputing, 42, 267–285.

  83. Rong, H. J., Ong, Y.-S., Tan, A.-W., & Zhu, Z. (2008). A fast pruned-extreme learning machine for classification problem. Neurocomputing, 72, 359–366.

  84. Rosipal, R., Koska, M., & Farkas, I. (1998). Prediction of chaotic time-series with a resource-allocating RBF network. Neural Processing Letters, 7, 185–197.

  85. Roy, A., Govil, S., & Miranda, R. (1995). An algorithm to generate radial basis functions (RBF)-like nets for classification problems. Neural Networks, 8(2), 179–201.

  86. Salmeron, M., Ortega, J., Puntonet, C. G., & Prieto, A. (2001). Improved RAN sequential prediction using orthogonal techniques. Neurocomputing, 41, 153–172.

  87. Sanchez, A. V. D. (1995). Robustization of a learning method for RBF networks. Neurocomputing, 9, 85–94.

  88. Schilling, R. J., & Carroll, J. J., Jr. (2001). Approximation of nonlinear systems with radial basis function neural networks. IEEE Transactions on Neural Networks, 12(1), 1–15.

  89. Schwenker, F., Kestler, H. A., & Palm, G. (2001). Three learning phases for radial-basis-function networks. Neural Networks, 14, 439–458.

  90. Simon, D. (2002). Training radial basis neural networks with the extended Kalman filter. Neurocomputing, 48, 455–475.

  91. Singla, P., Subbarao, K., & Junkins, J. L. (2007). Direction-dependent learning approach for radial basis function networks. IEEE Transactions on Neural Networks, 18(1), 203–222.

  92. Specht, D. F. (1990). Probabilistic neural networks. Neural Networks, 3, 109–118.

  93. Sum, J. P.-F., Leung, C.-S., & Ho, K. I.-J. (2009). On objective function, regularizer, and prediction error of a learning algorithm for dealing with multiplicative weight noise. IEEE Transactions on Neural Networks, 20(1), 124–138.

  94. Suresh, S., Savitha, R., & Sundararajan, N. (2011). A sequential learning algorithm for complex-valued self-regulating resource allocation network-CSRAN. IEEE Transactions on Neural Networks, 22(7), 1061–1072.

  95. Teddy, S. D., Quek, C., & Lai, E. M.-K. (2008). PSECMAC: A novel self-organizing multiresolution associative memory architecture. IEEE Transactions on Neural Networks, 19(4), 689–712.

  96. Titsias, M. K., & Likas, A. (2001). Shared kernel models for class conditional density estimation. IEEE Transactions on Neural Networks, 12(5), 987–997.

  97. Todorovic, B., & Stankovic, M. (2001). Sequential growing and pruning of radial basis function network. In Proceedings of the International Joint Conference on Neural Networks (IJCNN) (pp. 1954–1959), Washington, DC.

  98. Uykan, Z., Guzelis, C., Celebi, M. E., & Koivo, H. N. (2000). Analysis of input-output clustering for determining centers of RBFN. IEEE Transactions on Neural Networks, 11(4), 851–858.

  99. Wang, X. X., Chen, S., & Brown, D. J. (2004). An approach for constructing parsimonious generalized Gaussian kernel regression models. Neurocomputing, 62, 441–457.

  100. Webb, A. R. (1994). Functional approximation in feed-forward networks: A least-squares approach to generalization. IEEE Transactions on Neural Networks, 5, 363–371.

  101. Webb, A. R., & Lowe, D. (1990). The optimized internal representation of multilayer classifier networks performs nonlinear discriminant analysis. Neural Networks, 3, 367–375.

  102. Wedge, D., Ingram, D., McLean, D., Mingham, C., & Bandar, Z. (2006). On global-local artificial neural networks for function approximation. IEEE Transactions on Neural Networks, 17(4), 942–952.

  103. Wettschereck, D., & Dietterich, T. (1992). Improving the performance of radial basis function networks by learning center locations. In J. E. Moody, S. J. Hanson & R. P. Lippmann (Eds.), Advances in neural information processing systems (Vol. 4, pp. 1133–1140). San Mateo, CA: Morgan Kaufmann.

  104. Widrow, B., & Lehr, M. A. (1990). 30 years of adaptive neural networks: Perceptron, madaline, and backpropagation. Proceedings of the IEEE, 78(9), 1415–1442.

  105. Widrow, B., Greenblatt, A., Kim, Y., & Park, D. (2013). The no-prop algorithm: A new learning algorithm for multilayer neural networks. Neural Networks, 37, 182–188.

  106. Wilamowski, B. M., & Yu, H. (2010). Improved computation for Levenberg-Marquardt training. IEEE Transactions on Neural Networks, 21(6), 930–937.

  107. Xie, T., Yu, H., Hewlett, J., Rozycki, P., & Wilamowski, B. (2012). Fast and efficient second-order method for training radial basis function networks. IEEE Transactions on Neural Networks and Learning Systems, 23(4), 609–619.

  108. Yingwei, L., Sundararajan, N., & Saratchandran, P. (1997). A sequential learning scheme for function approximation by using minimal radial basis function neural networks. Neural Computation, 9(2), 461–478.

  109. Yingwei, L., Sundararajan, N., & Saratchandran, P. (1998). Performance evaluation of a sequential minimal radial basis function (RBF) neural network learning algorithm. IEEE Transactions on Neural Networks, 9(2), 308–318.

  110. Yu, D. L., Gomm, J. B., & Williams, D. (1997). A recursive orthogonal least squares algorithm for training RBF networks. Neural Processing Letters, 5, 167–176.

  111. Zhang, Q., & Benveniste, A. (1992). Wavelet networks. IEEE Transactions on Neural Networks, 3(6), 899–905.

  112. Zhang, Q. (1997). Using wavelet networks in nonparametric estimation. IEEE Transactions on Neural Networks, 8(2), 227–236.

  113. Zhang, J., Walter, G. G., Miao, Y., & Lee, W. N. W. (1995). Wavelet neural networks for function learning. IEEE Transactions on Signal Processing, 43(6), 1485–1497.

Author information

Corresponding author

Correspondence to Ke-Lin Du.

Copyright information

© 2014 Springer-Verlag London

About this chapter

Cite this chapter

Du, KL., Swamy, M.N.S. (2014). Radial Basis Function Networks. In: Neural Networks and Statistical Learning. Springer, London. https://doi.org/10.1007/978-1-4471-5571-3_10

  • DOI: https://doi.org/10.1007/978-1-4471-5571-3_10

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-5570-6

  • Online ISBN: 978-1-4471-5571-3

  • eBook Packages: Engineering, Engineering (R0)
