
Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 3944))


Abstract

This article describes the competitive associative net CAN2 and the cross-validation scheme we used for making predictions and estimating predictive uncertainty on the regression problems of the Evaluating Predictive Uncertainty Challenge. The CAN2, with an efficient batch learning method for reducing empirical (training) error, is combined with cross-validation to keep the prediction (generalization) error small and to estimate the predictive distribution accurately. By analogy with Bayesian learning, a stochastic analysis is derived to indicate the validity of our method.
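The idea of combining a regression model with cross-validation to estimate predictive uncertainty can be sketched as follows. This is a minimal illustration, not the authors' CAN2: a plain least-squares line stands in for the net's piecewise-linear units, and the pooled held-out squared residuals from K-fold cross-validation serve as an estimate of the predictive variance.

```python
import random

def linear_fit(xs, ys):
    # Ordinary least squares for y = a*x + b (a simple stand-in for the
    # piecewise-linear units of a CAN2; purely illustrative).
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def kfold_predictive_variance(xs, ys, k=5, seed=0):
    # Shuffle indices, split into k folds, fit on k-1 folds, and pool the
    # held-out squared residuals; their mean estimates the predictive variance.
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    residuals = []
    for fold in folds:
        held = set(fold)
        train = [i for i in idx if i not in held]
        a, b = linear_fit([xs[i] for i in train], [ys[i] for i in train])
        residuals.extend((ys[i] - (a * xs[i] + b)) ** 2 for i in fold)
    return sum(residuals) / len(residuals)

# Noisy linear data: the cross-validation estimate should land near the
# true noise variance (std 0.1, i.e. variance 0.01).
rng = random.Random(1)
xs = [i / 10 for i in range(100)]
ys = [2.0 * x + 1.0 + rng.gauss(0, 0.1) for x in xs]
var = kfold_predictive_variance(xs, ys, k=5)
```

Because every point is held out exactly once, the pooled residuals approximate out-of-sample error rather than the optimistic training error, which is what makes the resulting variance usable as a predictive-uncertainty estimate.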




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kurogi, S., Sawa, M., Tanaka, S. (2006). Competitive Associative Nets and Cross-Validation for Estimating Predictive Uncertainty on Regression Problems. In: Quiñonero-Candela, J., Dagan, I., Magnini, B., d’Alché-Buc, F. (eds) Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment. MLCW 2005. Lecture Notes in Computer Science, vol 3944. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11736790_6


  • DOI: https://doi.org/10.1007/11736790_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-33427-9

  • Online ISBN: 978-3-540-33428-6

  • eBook Packages: Computer Science (R0)
