
Part of the book series: Intelligent Systems Reference Library ((ISRL,volume 69))

Abstract

We have reached the end of the book. Have we answered the question raised in the introductory chapter? Having presented several options for classification – SVMs alone, EAs alone, and hybridization at two stages of learning – which choice proved most advantageous, taking into consideration prediction accuracy, comprehensibility, simplicity, flexibility and runtime?

Let us summarize the conclusions we can draw after experimenting with each technique:

  • SVMs (Chap. 2) hold the key to high accuracy and good runtime. If accuracy and runtime are all that matter to the user, SVMs are the best choice. If the user is also interested in an intuitive idea of what a prototype for a class looks like, other options should be considered too, as SVMs offer insufficient explanation [Huysmans et al, 2006] of how the predicted outcome was reached. What cannot be understood cannot be fully trusted.

  • GC (Chap. 4) and CC (Chap. 5) evolve class prototypes holding attribute thresholds that must be simultaneously reached in order to label a sample with a certain outcome. Fitness evaluation rewards good accuracy on the training data. Diversity among prototypes of distinct classes is preserved either through radii separating subpopulations or through the maintenance of multiple populations, each connected to one class. Understandability increases, as thresholds are provided for the attributes that differentiate between outcomes, but accuracy cannot surpass that of SVMs. Feature selection can be added directly to the evolutionary process through a concurrent HC. Not all problem variables then need to be encoded into the structure of an individual, which reduces genome length and enhances comprehensibility.

  • These findings trigger the idea of combining the good accuracy of SVMs with the comprehensible class prototypes of EAs. ESVMs (Chap. 6) geometrically discriminate between training samples like SVMs do, formulate the SVM primal optimization problem and solve it by evolving the hyperplane coefficients through EAs – in effect, EAs perform the optimization inside SVMs. Accuracy is comparable to that of SVMs, the optimization engine is simpler and the kernel choice is unconstrained. As for comprehensibility, ESVMs are merely more direct than SVMs in providing the weights. Runtime increases because the entire data set is addressed each time an individual is evaluated; chunking partly resolves this. Nevertheless, another advantage over SVMs is that ESVMs can be endowed with a GA feature selector, run simultaneously with weight evolution. This extracts the more informative attributes, yielding a more understandable classification result. The end-user can therefore see which indicators influence the outcome.

  • A sequential SVM-CC hybridization (Chap. 7) should combine the advantages of both. SVMs relabel the training data, so the EA can this time extract prototypes from a noise-free collection. A shorter, highly informative data set can also be obtained by keeping only the support vectors identified by the SVMs. Attribute thresholds are again generated by CC, the better performing of the two presented EA approaches to classification. A HC is again included for synchronized feature selection. A ranking of the importance of each problem variable for the triggered outcome can additionally be perceived. Finally, feature selection at the level of the obtained prototypes presents a comprehensible picture of the attributes significant for each class and the thresholds by which they differentiate between outcomes.
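As an illustration of the prototype idea summarized above, here is a minimal sketch – not the book's actual GC/CC algorithms – in which a (1+1) evolutionary loop adjusts a vector of attribute thresholds with training accuracy as fitness. The data set and all parameters are invented for illustration.

```python
import random

# Toy training set (invented for illustration): each sample is a pair of
# attribute values plus a class label; class-1 samples are exactly those
# whose attributes both exceed some hidden thresholds.
DATA = [((0.8, 0.7), 1), ((0.9, 0.6), 1), ((0.2, 0.9), 0),
        ((0.7, 0.1), 0), ((0.85, 0.75), 1), ((0.3, 0.2), 0)]

def accuracy(prototype):
    """Training accuracy of a prototype: a sample is labelled with the
    class only if every attribute simultaneously reaches its threshold."""
    hits = 0
    for attrs, label in DATA:
        predicted = int(all(a >= t for a, t in zip(attrs, prototype)))
        hits += predicted == label
    return hits / len(DATA)

def evolve_prototype(generations=500, seed=0):
    """(1+1) evolutionary scheme over threshold vectors: mutate with
    Gaussian noise and keep the child whenever accuracy does not drop."""
    rng = random.Random(seed)
    proto = [rng.random(), rng.random()]
    for _ in range(generations):
        child = [min(1.0, max(0.0, t + rng.gauss(0, 0.1))) for t in proto]
        if accuracy(child) >= accuracy(proto):
            proto = child
    return proto
```

The evolved thresholds are directly readable – "class 1 requires attribute 1 above t1 and attribute 2 above t2" – which is exactly the comprehensibility argument made for the EA approaches.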
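The ESVM principle – an EA optimizing the SVM primal objective in place of quadratic programming – can likewise be sketched. The data set, the (1+1) scheme and all parameters below are illustrative assumptions, not the book's actual setup:

```python
import random

# Small linearly separable toy set, labels in {-1, +1} (data invented here).
X = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1)]
Y = [-1, -1, 1, 1]
C = 10.0  # penalty for margin violations, as in the soft-margin primal

def primal(w, b):
    """SVM primal objective: regularization term plus hinge losses."""
    margin = 0.5 * sum(wi * wi for wi in w)
    hinge = sum(max(0.0, 1.0 - y * (sum(wi * xi for wi, xi in zip(w, x)) + b))
                for x, y in zip(X, Y))
    return margin + C * hinge

def evolve_hyperplane(generations=3000, seed=1):
    """(1+1) evolution strategy over (w, b): Gaussian mutation, keep the
    candidate whenever the primal objective does not increase.  The EA thus
    replaces quadratic programming as the optimization engine."""
    rng = random.Random(seed)
    w, b = [0.0, 0.0], 0.0
    for _ in range(generations):
        cw = [wi + rng.gauss(0.0, 0.1) for wi in w]
        cb = b + rng.gauss(0.0, 0.1)
        if primal(cw, cb) <= primal(w, b):
            w, b = cw, cb
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0.0 else -1
```

Note that nothing in the evolutionary loop depends on the objective being quadratic, which is why the kernel choice is unconstrained – any evaluable function of (w, b) could be substituted for `primal`.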
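Finally, the first stage of the SVM-CC hybridization – relabelling the training data with the SVM's own predictions and keeping only the support-vector-like margin samples – can be outlined as below. The `decision` function, the data and the |f(x)| <= 1 margin test are schematic stand-ins, not the book's implementation:

```python
def relabel_and_reduce(decision, data):
    """Schematic first stage of the hybridization: replace the given labels
    with the SVM's own predictions, then keep only samples on or inside the
    margin (|f(x)| <= 1) -- the analogue of retaining the support vectors.
    `decision` stands in for any trained real-valued decision function."""
    relabelled = [(x, 1 if decision(x) >= 0.0 else -1) for x, _ in data]
    reduced = [(x, y) for x, y in relabelled if abs(decision(x)) <= 1.0]
    return relabelled, reduced

# Toy linear decision function and data (illustrative only):
decision = lambda x: 2.0 * x[0] - 1.0
data = [((0.1,), -1), ((0.45,), 1), ((0.6,), -1), ((2.0,), 1)]
relabelled, reduced = relabel_and_reduce(decision, data)
# The noisy labels of (0.45,) and (0.6,) are corrected to -1 and +1;
# (2.0,) lies far outside the margin and is dropped from the reduced set.
```

The EA then extracts its attribute-threshold prototypes from `reduced` – a smaller, noise-free and highly informative collection, as argued in Chap. 7.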

Author information

Correspondence to Catalin Stoean.

Copyright information

© 2014 Springer International Publishing Switzerland

Cite this chapter

Stoean, C., Stoean, R. (2014). Final Remarks. In: Support Vector Machines and Evolutionary Algorithms for Classification. Intelligent Systems Reference Library, vol 69. Springer, Cham. https://doi.org/10.1007/978-3-319-06941-8_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-06940-1

  • Online ISBN: 978-3-319-06941-8
