
Bayesian Feature Construction

  • Conference paper
Advances in Artificial Intelligence (SETN 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3955)


Abstract

This paper addresses the problem of improving classification performance by means other than refining a Machine Learning algorithm's ability to build a precise classification model. Instead, we approach the problem through an extended encoding of the training data: our method generates additional features in order to reveal hidden aspects of the domain modeled by the available training examples. We propose a novel feature construction algorithm based on the ability of Bayesian networks to represent the conditional independence assumptions of a set of features, thereby exposing relational attributes that are not always obvious to a classifier when presented in their original form. The augmented feature set yields a significant increase in classification performance, demonstrated across a variety of machine learning domains (data sets from the UCI ML repository and from the Artificial Intelligence group) using several classifiers based on different theoretical backgrounds.
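The general idea described in the abstract can be illustrated with a minimal sketch: detect statistically dependent feature pairs (here via empirical mutual information, a hypothetical stand-in for reading dependencies off the arcs of a learned Bayesian network) and append a joint-value feature for each dependent pair before training a classifier. All function and parameter names below are illustrative assumptions, not the authors' algorithm.

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (nats) between two discrete columns."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        mi += pj * math.log(pj / ((px[x] / n) * (py[y] / n)))
    return mi

def construct_features(rows, threshold=0.1):
    """Augment each example with one joint-value feature per dependent pair.

    rows: list of tuples of discrete attribute values (one tuple per example).
    A full Bayesian-network structure learner (e.g. K2 or MDL-based search)
    would replace the pairwise test used here.
    """
    n_cols = len(rows[0])
    cols = list(zip(*rows))
    pairs = [(i, j) for i, j in combinations(range(n_cols), 2)
             if mutual_information(cols[i], cols[j]) > threshold]
    # Each new feature is the concatenated value pair, e.g. "sunny|hot".
    return [row + tuple(f"{row[i]}|{row[j]}" for i, j in pairs)
            for row in rows]

# Usage: two perfectly correlated columns and one constant column.
data = [("a", "a", "x"), ("b", "b", "x"), ("a", "a", "x"), ("b", "b", "x")]
augmented = construct_features(data)
```

In this toy run only the (column 0, column 1) pair exceeds the threshold, so each example gains exactly one constructed feature; the augmented rows can then be handed to any downstream classifier.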




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Maragoudakis, M., Fakotakis, N. (2006). Bayesian Feature Construction. In: Antoniou, G., Potamias, G., Spyropoulos, C., Plexousakis, D. (eds) Advances in Artificial Intelligence. SETN 2006. Lecture Notes in Computer Science, vol 3955. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11752912_25


  • DOI: https://doi.org/10.1007/11752912_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-34117-8

  • Online ISBN: 978-3-540-34118-5

  • eBook Packages: Computer Science (R0)
