
Part of the book series: SpringerBriefs in Computer Science (BRIEFSCOMPUTER)


Abstract

In this chapter, we present ETL committee, an ensemble method that uses ETL as the base learner. The ETL committee strategy relies on training data manipulation to create an ensemble of ETL classifiers. ETL committee combines the main ideas of bagging and random subspaces: from bagging, we borrow the bootstrap sampling method; from random subspaces, we use the feature sampling idea. In the ETL committee training, we use ETL with template sampling, which provides an additional randomization step. To the best of our knowledge, this is the first study that uses transformation rule learning as the base learner for an ensemble method. This chapter is organized as follows. In Sect. 3.1, we explain the main idea behind ensemble methods. In Sect. 3.2, we detail the ETL committee training phase. In Sect. 3.3, we detail the classification phase. Finally, in Sect. 3.4, we present related work.
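To make the three randomization steps concrete, the sketch below (in Python, not from the book) shows one way the training and classification phases could be organized. The function name train_etl, the number of committee members, and the sampling fractions are illustrative assumptions, not the book's actual settings; the ETL base learner itself is abstracted behind the hypothetical train_etl callable.

    import random
    from collections import Counter

    def train_etl_committee(train_data, features, templates, train_etl,
                            n_members=50, feature_frac=0.7, template_frac=0.7):
        # Each committee member sees: a bootstrap sample of the training set
        # (bagging), a random subset of the input features (random subspaces),
        # and a random subset of the rule templates (template sampling).
        committee = []
        for _ in range(n_members):
            bootstrap = [random.choice(train_data) for _ in range(len(train_data))]
            feats = random.sample(features, max(1, int(feature_frac * len(features))))
            tmpls = random.sample(templates, max(1, int(template_frac * len(templates))))
            # train_etl is a hypothetical stand-in for the ETL base learner; it is
            # assumed to return a classifier that maps an example to a class label.
            committee.append(train_etl(bootstrap, feats, tmpls))
        return committee

    def committee_classify(committee, example):
        # Classification phase: majority vote over the committee members' outputs.
        votes = Counter(member(example) for member in committee)
        return votes.most_common(1)[0][0]

At classification time each member labels the example independently and the committee output is the majority vote, mirroring the combination scheme described in Sect. 3.3.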




Copyright information

© 2012 The Author(s)

About this chapter

Cite this chapter

dos Santos, C.N., Milidiú, R.L. (2012). ETL Committee. In: Entropy Guided Transformation Learning: Algorithms and Applications. SpringerBriefs in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-2978-3_3


  • DOI: https://doi.org/10.1007/978-1-4471-2978-3_3


  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-2977-6

  • Online ISBN: 978-1-4471-2978-3

  • eBook Packages: Computer Science (R0)
