
Ensemble Selection for SuperParent-One-Dependence Estimators

  • Ying Yang
  • Kevin Korb
  • Kai Ming Ting
  • Geoffrey I. Webb
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3809)

Abstract

SuperParent-One-Dependence Estimators (SPODEs) loosen naive Bayes' attribute independence assumption by allowing each attribute to depend on a single common attribute (the superparent) in addition to the class. An ensemble of SPODEs can achieve high classification accuracy at modest computational cost. This paper investigates how to select SPODEs for ensembling. Several popular model selection strategies are presented; their learning efficacy and efficiency are analyzed theoretically and verified empirically. Accordingly, guidelines are proposed for choosing among selection criteria in differing contexts.
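Concretely, a single SPODE with superparent x_p estimates P(y, x) = P(y, x_p) Π_{j≠p} P(x_j | y, x_p), and an ensemble averages these estimates over a chosen set S of superparents: P(y, x) ≈ (1/|S|) Σ_{p∈S} P(y, x_p) Π_{j≠p} P(x_j | y, x_p). Below is a minimal Python sketch of such an ensemble for fully discrete data; the SpodeEnsemble class, its add-one smoothing, and the default of keeping every attribute as a superparent are illustrative assumptions, not details taken from this paper.

    from collections import defaultdict

    class SpodeEnsemble:
        def __init__(self, superparents=None):
            # Indices of the attributes used as superparents; None means
            # "use every attribute", i.e. plain AODE-style averaging.
            self.superparents = superparents

        def fit(self, X, y):
            # X: rows of discrete attribute values; y: class labels.
            self.n = len(X)
            self.n_attrs = len(X[0])
            self.classes = sorted(set(y))
            self.pair = defaultdict(int)    # counts of (class, p, x_p)
            self.triple = defaultdict(int)  # counts of (class, p, x_p, j, x_j)
            for row, c in zip(X, y):
                for p in range(self.n_attrs):
                    self.pair[(c, p, row[p])] += 1
                    for j in range(self.n_attrs):
                        if j != p:
                            self.triple[(c, p, row[p], j, row[j])] += 1
            if self.superparents is None:
                self.superparents = list(range(self.n_attrs))
            return self

        def predict(self, x):
            best, best_score = None, float("-inf")
            for c in self.classes:
                score = 0.0
                for p in self.superparents:
                    # One SPODE: P(c, x_p) * prod_{j != p} P(x_j | c, x_p).
                    # Add-one smoothing here is a simplification, not the
                    # paper's estimator.
                    parent = self.pair[(c, p, x[p])]
                    spode = (parent + 1.0) / (self.n + len(self.classes))
                    for j in range(self.n_attrs):
                        if j != p:
                            spode *= (self.triple[(c, p, x[p], j, x[j])] + 1.0) / (parent + 2.0)
                    score += spode  # summing = averaging up to a constant
                if score > best_score:
                    best, best_score = c, score
            return best

    # Example on toy data: attribute 0 tracks the class.
    model = SpodeEnsemble().fit([[0, 1], [1, 1], [1, 0]], [0, 1, 1])
    print(model.predict([1, 0]))  # -> 1

Passing an explicit subset, e.g. SpodeEnsemble(superparents=[0, 2]), restricts the ensemble to the SPODEs rooted at those attributes; choosing that subset well is exactly the selection problem the paper studies.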

Keywords

Bayesian Network · Training Instance · Minimum Description Length · Ensemble Selection · Minimum Message Length

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Ying Yang¹
  • Kevin Korb¹
  • Kai Ming Ting¹
  • Geoffrey I. Webb¹

  1. School of Computer Science and Software Engineering, Faculty of Information Technology, Monash University, Australia
