Repeated Measures Multiple Comparison Procedures Applied to Model Selection in Neural Networks

  • Elisa Guerrero Vázquez
  • Andrés Yañez Escolano
  • Pedro Galindo Riaño
  • Joaquín Pizarro Junquera
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2085)

Abstract

One of the main research concerns in neural networks is finding the appropriate network size, which requires balancing overfitting against poor approximation. In this paper, the choice among competing models fitted to the same data set is addressed by applying statistical methods for model comparison. The study aims to identify a range of models that perform equally well as the cost of complexity varies. If the models do not differ in performance, their generalization error estimates should be about the same across the set of models; if they do differ, the estimates should differ as well, and the task then consists of analyzing the pairwise differences between the smallest generalization error estimate and each of the remaining estimates, in order to bound the set of models that can be expected to give equal performance. The method is illustrated on polynomial regression and RBF neural networks.
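
A minimal sketch of the kind of procedure the abstract describes, under assumptions that are not taken from the paper: repeated random train/test splits serve as the repeated measures, a Friedman test checks whether the models differ at all, and pairwise Wilcoxon tests with a Holm correction compare every model against the one with the smallest mean test error. The synthetic cubic data, the chosen tests, and the significance level are illustrative only; the paper's exact resampling scheme and multiple comparison procedure may differ.

```python
# Sketch: repeated-measures model comparison for polynomial degree selection.
# Assumptions (not from the paper): synthetic cubic data, 30 repeated random
# train/test splits, Friedman test + pairwise Wilcoxon tests vs. the best
# model with a Holm step-down correction at alpha = 0.05.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)

x = np.linspace(-1, 1, 120)
y_true = 1.5 * x**3 - 0.5 * x
degrees = [1, 2, 3, 4, 5, 6, 7, 8]   # competing model complexities
n_repeats = 30                       # repeated measures per model

# errors[r, d] = test MSE of model d on split r; every model is evaluated on
# the same split within a repeat, which makes the design repeated-measures.
errors = np.empty((n_repeats, len(degrees)))
for r in range(n_repeats):
    y = y_true + rng.normal(scale=0.2, size=x.size)
    idx = rng.permutation(x.size)
    train, test = idx[:80], idx[80:]
    for d, deg in enumerate(degrees):
        coefs = np.polyfit(x[train], y[train], deg)
        pred = np.polyval(coefs, x[test])
        errors[r, d] = np.mean((pred - y[test]) ** 2)

# Global test: do the generalization error estimates differ at all?
stat, p_global = friedmanchisquare(*errors.T)
print(f"Friedman test: chi2 = {stat:.2f}, p = {p_global:.4f}")

best = int(np.argmin(errors.mean(axis=0)))   # smallest mean test error
print(f"Best mean test error: degree {degrees[best]}")

# Pairwise comparisons of every model against the best one (Holm correction).
others = [d for d in range(len(degrees)) if d != best]
p_vals = [wilcoxon(errors[:, best], errors[:, d]).pvalue for d in others]
order = np.argsort(p_vals)
alpha = 0.05
equivalent = {degrees[best]}
for rank, k in enumerate(order):
    if p_vals[k] > alpha / (len(p_vals) - rank):   # Holm stopping rule
        equivalent.update(degrees[others[j]] for j in order[rank:])
        break
print("Degrees not significantly worse than the best:", sorted(equivalent))
```

The set printed at the end plays the role of the "range of models which might result in an equal performance" mentioned above: every degree whose errors cannot be distinguished from those of the best-scoring model.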


Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Elisa Guerrero Vázquez 1
  • Andrés Yañez Escolano 1
  • Pedro Galindo Riaño 1
  • Joaquín Pizarro Junquera 1
  1. Departamento de Lenguajes y Sistemas Informáticos, Grupo de Investigación “Sistemas Inteligentes de Computación”, Universidad de Cádiz, Puerto Real, Spain