Enhancing Recursive Supervised Learning Using Clustering and Combinatorial Optimization (RSL-CC)

  • Chapter
Engineering Evolutionary Intelligent Systems

Part of the book series: Studies in Computational Intelligence (SCI, volume 82)

Summary

Using a team of weak learners to learn a dataset has been shown to outperform using a single strong learner. Indeed, the idea is so successful that boosting, an algorithm that combines several weak learners for supervised learning, is considered one of the best off-the-shelf classifiers. Some problems remain, however, including determining the optimal number of weak learners and overfitting the data. In earlier work, we developed the RPHP algorithm, which addresses both problems by combining a genetic algorithm, weak learners, and a pattern distributor. In this chapter, we revise the global search component, replacing it with cluster-based combinatorial optimization. Patterns are clustered according to the output space of the problem, i.e., natural clusters are formed from the patterns belonging to each class. This yields a combinatorial optimization problem, which is solved using evolutionary algorithms that identify the "easy" and "difficult" clusters in the system. Removing the easy patterns allows focused learning of the more complicated ones, so the problem becomes recursively simpler. Overfitting is overcome by using a set of validation patterns together with a pattern distributor. We also propose an algorithm that uses the pattern distributor to determine the optimal number of recursions, and hence the optimal number of weak learners, for the problem. Empirical studies show generally good performance compared with other state-of-the-art methods.
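
To make the recursion concrete, the sketch below mimics the RSL-CC loop under stated assumptions: clusters are formed per output class, a mutation-only evolutionary search (standing in for the chapter's GA) selects the "easy" clusters that a weak learner can fit well on validation data, those clusters are removed, and the loop recurses on the remainder. The names `WeakLearner`, `class_clusters`, `ga_select_clusters`, and `rsl_cc` are hypothetical; the nearest-centroid learner and the (1+1)-style search are simplifications, not the authors' implementation.

```python
# Illustrative sketch only; all names are hypothetical stand-ins, and the
# nearest-centroid learner and (1+1) search simplify the chapter's method.
import numpy as np


class WeakLearner:
    """Toy nearest-centroid classifier standing in for a weak learner."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Squared distance from every pattern to every class centroid.
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=-1)
        return self.classes_[d.argmin(axis=1)]


def class_clusters(X, y):
    """'Natural' clusters: one group of patterns per output class."""
    return [(X[y == c], y[y == c]) for c in np.unique(y)]


def fitness(mask, clusters, X_val, y_val):
    """Validation accuracy of a weak learner trained on the clusters the
    bit mask marks as 'easy' -- the objective the evolutionary search sees."""
    chosen = [c for keep, c in zip(mask, clusters) if keep]
    if not chosen:
        return 0.0
    X = np.vstack([Xc for Xc, _ in chosen])
    y = np.concatenate([yc for _, yc in chosen])
    learner = WeakLearner().fit(X, y)
    return float((learner.predict(X_val) == y_val).mean())


def ga_select_clusters(clusters, X_val, y_val, steps=500, seed=0):
    """Mutation-only (1+1) evolutionary search over bit strings, standing in
    for the chapter's GA: each bit marks one cluster as 'easy'."""
    rng = np.random.default_rng(seed)
    best = rng.integers(0, 2, size=len(clusters)).astype(bool)
    best_fit = fitness(best, clusters, X_val, y_val)
    for _ in range(steps):
        cand = best.copy()
        cand[rng.integers(len(cand))] ^= True  # flip one bit
        f = fitness(cand, clusters, X_val, y_val)
        if f >= best_fit:
            best, best_fit = cand, f
    return best


def rsl_cc(X, y, X_val, y_val, max_recursions=5):
    """Each recursion trains one weak learner on the 'easy' clusters and
    defers the 'difficult' ones, so the problem shrinks recursively."""
    clusters, learners = class_clusters(X, y), []
    for _ in range(max_recursions):
        if not clusters:
            break
        easy = ga_select_clusters(clusters, X_val, y_val)
        if not easy.any():  # nothing is easy any more: learn the rest, stop
            easy[:] = True
        chosen = [c for keep, c in zip(easy, clusters) if keep]
        Xe = np.vstack([Xc for Xc, _ in chosen])
        ye = np.concatenate([yc for _, yc in chosen])
        learners.append(WeakLearner().fit(Xe, ye))
        clusters = [c for keep, c in zip(easy, clusters) if not keep]
    return learners
```

In the chapter, a pattern distributor trained on the validation patterns routes each unseen pattern to the appropriate weak learner and is also used to choose the number of recursions; that component is omitted from the sketch for brevity.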




Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Ramanathan, K., Guan, S.U. (2008). Enhancing Recursive Supervised Learning Using Clustering and Combinatorial Optimization (RSL-CC). In: Abraham, A., Grosan, C., Pedrycz, W. (eds) Engineering Evolutionary Intelligent Systems. Studies in Computational Intelligence, vol 82. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-75396-4_6

  • DOI: https://doi.org/10.1007/978-3-540-75396-4_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-75395-7

  • Online ISBN: 978-3-540-75396-4

  • eBook Packages: Engineering, Engineering (R0)
