Abstract
In this chapter, we present a general entropy guided transformation learning (ETL) configuration for natural language processing (NLP) tasks. We use the same configuration when applying ETL to each of the four examined tasks; hence, the ETL modeling phase requires little effort. Moreover, the use of a common parameter setting provides some insight into the robustness of the learning algorithm. This chapter is organized as follows. In Sect. 4.1, we show how to model NLP tasks as classification tasks. In Sect. 4.2, we present the basic ETL parameter setting. In Sect. 4.3, we present the ETL committee parameter setting. In Sect. 4.4, we detail the performance measures used to assess system performance on NLP tasks. Finally, in Sect. 4.5, we describe the software and hardware used in our experiments.
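To illustrate the modeling idea behind Sect. 4.1, here is a minimal sketch of how a token-level NLP task such as part-of-speech tagging can be cast as a classification task: each token becomes one classification example, described by features drawn from a window of surrounding words. The feature names, the window size, and the padding symbol below are illustrative assumptions, not the book's exact setting.

```python
def window_features(words, i, size=2):
    """Build a feature dict from a window of +/- `size` words around position i."""
    feats = {}
    for offset in range(-size, size + 1):
        j = i + offset
        # Pad positions that fall outside the sentence (assumed convention).
        word = words[j] if 0 <= j < len(words) else "<PAD>"
        feats[f"word[{offset:+d}]"] = word
    return feats

def to_classification_examples(tagged_sentence, size=2):
    """Turn a [(word, tag), ...] sentence into (features, label) pairs."""
    words = [w for w, _ in tagged_sentence]
    return [(window_features(words, i, size), tag)
            for i, (_, tag) in enumerate(tagged_sentence)]

# One example per token: the classifier predicts the tag from the window.
sentence = [("The", "DT"), ("dog", "NN"), ("barks", "VBZ")]
examples = to_classification_examples(sentence)
```

Any per-token decision task (POS tagging, chunking, named entity recognition) fits this mold; only the label set and the feature windows change.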
Copyright information
© 2012 The Author(s)
Cite this chapter
dos Santos, C.N., Milidiú, R.L. (2012). General ETL Modeling for NLP Tasks. In: Entropy Guided Transformation Learning: Algorithms and Applications. SpringerBriefs in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-2978-3_4
Publisher Name: Springer, London
Print ISBN: 978-1-4471-2977-6
Online ISBN: 978-1-4471-2978-3