Abstract
This article describes the competitive associative net CAN2 and the cross-validation procedure we used for making predictions and estimating predictive uncertainty on the regression problems of the Evaluating Predictive Uncertainty Challenge. The CAN2, together with an efficient batch learning method that reduces the empirical (training) error, is combined with cross-validation to keep the prediction (generalization) error small and to estimate the predictive distribution accurately. By analogy with Bayesian learning, a stochastic analysis is derived that indicates the validity of our method.
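The abstract pairs a trained regression model with cross-validation to estimate predictive uncertainty. As a generic illustration only (not the paper's CAN2 algorithm), held-out residuals from k-fold cross-validation can be pooled to estimate a predictive standard deviation; the `model_fit` and `model_predict` callables below are placeholder names, not functions from the paper:

```python
import numpy as np

def cv_predictive_std(model_fit, model_predict, X, y, k=10, seed=0):
    """Estimate a global predictive standard deviation from the pooled
    out-of-fold residuals of k-fold cross-validation (generic sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # shuffle before splitting
    folds = np.array_split(idx, k)
    residuals = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        params = model_fit(X[train], y[train])          # train on k-1 folds
        residuals.append(y[test] - model_predict(params, X[test]))
    residuals = np.concatenate(residuals)               # one residual per sample
    return np.sqrt(np.mean(residuals ** 2))             # RMS of held-out errors
```

Because every residual comes from a model that never saw that sample, the RMS above estimates generalization error rather than training error, which is the distinction the abstract draws between empirical and prediction error.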
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Kurogi, S., Sawa, M., Tanaka, S. (2006). Competitive Associative Nets and Cross-Validation for Estimating Predictive Uncertainty on Regression Problems. In: Quiñonero-Candela, J., Dagan, I., Magnini, B., d'Alché-Buc, F. (eds) Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment. MLCW 2005. Lecture Notes in Computer Science, vol 3944. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11736790_6
DOI: https://doi.org/10.1007/11736790_6
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-33427-9
Online ISBN: 978-3-540-33428-6