Abstract
Autonomous machines are of interest to researchers and the general public alike. Everyone wants a self-controlled machine that does the work by itself and handles all kinds of problems. Supervised learning and classification have therefore become important for high-dimensional and complex problems. However, classification algorithms deal only with discrete classes, while practical, real-life applications contain continuous labels. Although several statistical machine learning techniques have been applied to this problem, they act as black boxes and their actions are difficult to justify. Covering algorithms (CA), by contrast, are a type of inductive learning that can be used to build a simple and powerful rule repository. Nevertheless, current CA approaches that deal with continuous classes are biased, non-updatable, overspecialized and sensitive to noise, or time-consuming. Consequently, this paper proposes a novel non-discretization algorithm that deals with numeric classes while predicting discrete actions. It is a new member of the RULES family, called RULES-3C, which learns interactively and transfers experience by exploiting the properties of reinforcement learning. This paper investigates and assesses the performance of RULES-3C on different practical cases and against other algorithms. The Friedman test is also applied to rank the performance of RULES-3C and measure its significance.
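The abstract mentions using the Friedman test to rank RULES-3C against other algorithms across data sets. The sketch below is not from the paper: it is a minimal, self-contained illustration of how such a ranking works (average ranks per data set, then the Friedman chi-square statistic), with hypothetical error rates standing in for real experimental results.

```python
# Illustrative sketch (not the paper's code): Friedman-test ranking of
# k algorithms over N data sets, as commonly used for classifier
# comparison. All error rates below are hypothetical.

def average_ranks(scores):
    """scores[i][j] = error of algorithm j on data set i (lower is better).
    Returns each algorithm's average rank (1 = best), averaging ties."""
    n_datasets, n_algos = len(scores), len(scores[0])
    totals = [0.0] * n_algos
    for row in scores:
        order = sorted(range(n_algos), key=lambda j: row[j])
        ranks = [0.0] * n_algos
        i = 0
        while i < n_algos:
            # extend j over a run of tied values, then assign the mean rank
            j = i
            while j + 1 < n_algos and row[order[j + 1]] == row[order[i]]:
                j += 1
            mean_rank = (i + j) / 2.0 + 1.0
            for t in range(i, j + 1):
                ranks[order[t]] = mean_rank
            i = j + 1
        for j in range(n_algos):
            totals[j] += ranks[j]
    return [t / n_datasets for t in totals]

def friedman_statistic(scores):
    """Friedman chi-square statistic over N data sets and k algorithms."""
    N, k = len(scores), len(scores[0])
    R = average_ranks(scores)
    return 12.0 * N / (k * (k + 1)) * (
        sum(r * r for r in R) - k * (k + 1) ** 2 / 4.0
    )

# Hypothetical error rates: 4 data sets x 3 algorithms
errors = [
    [0.12, 0.15, 0.20],
    [0.08, 0.10, 0.09],
    [0.21, 0.25, 0.30],
    [0.05, 0.07, 0.06],
]
print(average_ranks(errors))      # -> [1.0, 2.5, 2.5]
print(friedman_statistic(errors)) # -> 6.0
```

Under the null hypothesis (all algorithms equivalent), the statistic follows a chi-square distribution with k − 1 degrees of freedom; a large value justifies post-hoc pairwise comparisons.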
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
ElGibreen, H., Aksoy, M.S. (2015). Classifying Continuous Classes with Reinforcement Learning RULES. In: Nguyen, N., Trawiński, B., Kosala, R. (eds) Intelligent Information and Database Systems. ACIIDS 2015. Lecture Notes in Computer Science(), vol 9012. Springer, Cham. https://doi.org/10.1007/978-3-319-15705-4_12
Print ISBN: 978-3-319-15704-7
Online ISBN: 978-3-319-15705-4