
Exploiting Hierarchical Domain Values for Bayesian Learning

  • Conference paper
Advances in Knowledge Discovery and Data Mining (PAKDD 2003)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2637)


Abstract

This paper proposes a framework for exploiting the hierarchical structure of feature domain values in order to improve classification performance under the Bayesian learning framework. Inspired by the statistical technique known as shrinkage, we investigate the variance in the estimation of parameters for Bayesian learning. We develop two algorithms that maintain a balance between precision and robustness to improve the estimation. We have evaluated our methods on two real-world data sets: a weather data set and a yeast gene data set. The results demonstrate that our models benefit from exploiting the hierarchical structures.
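The core idea of shrinkage-based estimation can be illustrated with a small sketch. The paper's two weighting algorithms are not reproduced here; the code below assumes a generic fixed-weight linear interpolation between the sparse estimate at a leaf value and the more robust estimate at its ancestor in the value hierarchy. All names and numbers are hypothetical.

```python
from collections import Counter

def node_estimate(counts: Counter, vocab: set) -> dict:
    """Laplace-smoothed estimate of P(value) from raw counts at one node."""
    total = sum(counts.values())
    return {v: (counts[v] + 1) / (total + len(vocab)) for v in vocab}

def shrinkage_estimate(path_counts: list, vocab: set, weights: list) -> dict:
    """Interpolate per-node estimates along the leaf-to-root path.

    path_counts : one Counter per hierarchy level, leaf first, root last
    weights     : one mixture weight per level, summing to 1
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    levels = [node_estimate(c, vocab) for c in path_counts]
    return {v: sum(w * est[v] for w, est in zip(weights, levels))
            for v in vocab}

# Hypothetical weather-style data: the leaf node has few observations
# (precise in scope but high variance), while its parent aggregates many
# (robust but less specific) -- shrinkage trades the two off.
vocab = {"sunny", "rain", "snow"}
leaf = Counter({"sunny": 2})
parent = Counter({"sunny": 40, "rain": 55, "snow": 5})
p = shrinkage_estimate([leaf, parent], vocab, weights=[0.4, 0.6])
```

Because each per-node estimate is a proper distribution and the weights sum to one, the interpolated result remains a proper distribution; a principled choice of the weights (e.g. variance-based, as the shrinkage literature suggests) is where the two proposed algorithms would differ.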

The work described in this paper was partially supported by grants from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Nos: CUHK 4385/99E and CUHK 4187/01E).




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Han, Y., Lam, W. (2003). Exploiting Hierarchical Domain Values for Bayesian Learning. In: Whang, KY., Jeon, J., Shim, K., Srivastava, J. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2003. Lecture Notes in Computer Science, vol 2637. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36175-8_25

  • DOI: https://doi.org/10.1007/3-540-36175-8_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-04760-5

  • Online ISBN: 978-3-540-36175-6
