Feedforward Neural Networks for Nonparametric Regression

  • David Ríos Insua
  • Peter Müller
Part of the Lecture Notes in Statistics book series (LNS, volume 133)


Feed-forward neural networks (FFNN) with an unconstrained random number of hidden neurons define flexible nonparametric regression models. In Müller and Rios Insua (1998) we argued that variable-architecture models with a random-size hidden layer significantly reduce the posterior multimodality typical of posterior distributions in neural network models. In this chapter we review the model proposed in Müller and Rios Insua (1998) and extend it to a nonparametric model by allowing an unconstrained hidden-layer size. This is made possible by a Markov chain Monte Carlo posterior simulation scheme that uses reversible jump steps (Green 1995) to move between architectures of different sizes.
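The variable-architecture idea can be illustrated with a minimal sketch of a birth/death reversible jump step. The sketch below is illustrative only, not the chapter's actual specification: it assumes a single-input FFNN with logistic hidden units, a Gaussian likelihood with known variance, new-node weights proposed from their normal prior (so prior and proposal densities cancel in the acceptance ratio), and a Poisson(λ) prior on the number of hidden nodes with exchangeable node labels (so the death-selection factor 1/(M+1) cancels against the labeling factor).

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(x, beta0, betas, gammas):
    # FFNN fit: f(x) = beta0 + sum_j betas[j] * psi(gammas[j,0] + gammas[j,1]*x),
    # with psi the logistic function; gammas has shape (M, 2) for M hidden nodes.
    h = 1.0 / (1.0 + np.exp(-(gammas[:, 0][None, :] + gammas[:, 1][None, :] * x[:, None])))
    return beta0 + h @ betas

def log_lik(y, x, beta0, betas, gammas, sigma=0.1):
    # Gaussian likelihood with known sigma (additive constant dropped).
    r = y - predict(x, beta0, betas, gammas)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

def rj_step(y, x, beta0, betas, gammas, lam=3.0, tau=1.0):
    # One reversible jump birth/death move on the hidden-layer size M.
    # lam: Poisson prior mean for M; tau: prior sd for new-node weights.
    M = len(betas)
    birth = (rng.random() < 0.5) or M == 0   # never propose death at M = 0
    if birth:
        # Propose a new node from the prior; prior/proposal densities cancel,
        # leaving the likelihood ratio times the Poisson prior ratio lam/(M+1).
        betas2 = np.append(betas, rng.normal(0.0, tau))
        gammas2 = np.vstack([gammas, rng.normal(0.0, tau, size=2)])
        log_a = (log_lik(y, x, beta0, betas2, gammas2)
                 - log_lik(y, x, beta0, betas, gammas)
                 + np.log(lam / (M + 1)))
    else:
        # Death: delete a uniformly chosen node; ratio is the reciprocal move.
        j = rng.integers(M)
        betas2 = np.delete(betas, j)
        gammas2 = np.delete(gammas, j, axis=0)
        log_a = (log_lik(y, x, beta0, betas2, gammas2)
                 - log_lik(y, x, beta0, betas, gammas)
                 + np.log(M / lam))
    if np.log(rng.random()) < log_a:
        return betas2, gammas2
    return betas, gammas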






References

  1. Bernardo, J.M. and Smith, A.F.M. (1994). Bayesian Theory, Wiley, New York.
  2. Bishop, C.M. (1996). Neural Networks for Pattern Recognition, Oxford University Press, Oxford.
  3. Buntine, W.L. and Weigend, A.S. (1991). Bayesian back-propagation. Complex Systems, 5, 603–643.
  4. Cheng, B. and Titterington, D.M. (1994). Neural networks: a review from a statistical perspective (with discussion). Statistical Science, 9, 2–54.
  5. Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2, 303–314.
  6. De Veaux, R. and Ungar, L. (1997). A brief introduction to neural networks. Technical Report, Williams College, Williamstown, MA.
  7. Green, P. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82, 711–732.
  8. Hornik, K., Stinchcombe, M. and White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2, 359–366.
  9. Lippmann, R.P. (1987). An introduction to computing with neural nets. IEEE ASSP Magazine, 4–22.
  10. MacKay, D.J.C. (1992). A practical Bayesian framework for backpropagation networks. Neural Computation, 4, 448–472.
  11. Müller, P. and Rios Insua, D. (1998). Issues in Bayesian analysis of neural network models. Neural Computation, 10, 571–592.
  12. Neal, R.M. (1996). Bayesian Learning for Neural Networks, Springer-Verlag, New York.
  13. Richardson, S. and Green, P. (1997). On Bayesian analysis of mixtures with an unknown number of components (with discussion). Journal of the Royal Statistical Society, Series B, 59, 731–792.
  14. Rios Insua, D., Rios Insua, S. and Martin, J. (1997). Simulacion, RA-MA, Madrid.
  15. Rios Insua, D., Salewicz, K.A., Müller, P. and Bielza, C. (1997). Bayesian methods in reservoir operations. In The Practice of Bayesian Analysis (eds: French, Smith), pp. 107–130, Wiley, New York.
  16. Ripley, B.D. (1993). Statistical aspects of neural networks. In Networks and Chaos (eds: Barndorff-Nielsen, Jensen, Kendall), Chapman and Hall, London.
  17. Rumelhart, D.E. and McClelland, J.L. (eds) (1986). Parallel Distributed Processing, MIT Press, Cambridge.
  18. Stern, H.S. (1996). Neural networks in applied statistics. Technometrics, 38, 205–220.
  19. Tierney, L. (1994). Markov chains for exploring posterior distributions. Annals of Statistics, 22, 1701–1762.
  20. West, M. and Harrison, J. (1996). Bayesian Forecasting and Dynamic Models, 2nd ed, Springer-Verlag, New York.
  21. Warner, B. and Misra, M. (1996). Understanding neural networks as statistical tools. The American Statistician, 50, 284–293.

Copyright information

© Springer Science+Business Media New York 1998
