HEAD-DT: Experimental Analysis

  • Rodrigo C. Barros
  • André C. P. L. F. de Carvalho
  • Alex A. Freitas
Part of the SpringerBriefs in Computer Science book series.


In this chapter, we present several empirical analyses that assess the performance of HEAD-DT in different scenarios. We divide these analyses into two sets of experiments, according to the meta-training strategy employed for automatically designing the decision-tree algorithms. As mentioned in Chap. 4, HEAD-DT can operate in two distinct frameworks: (i) evolving a decision-tree induction algorithm tailored to one specific data set (specific framework); or (ii) evolving a decision-tree induction algorithm from multiple data sets (general framework). The specific framework provides data from a single data set to HEAD-DT for both algorithm design (evolution) and performance assessment. The experiments conducted for this scenario (see Sect. 5.1) make use of public data sets that do not share a common application domain. In the general framework, distinct data sets are used for algorithm design and performance assessment. In this scenario (see Sect. 5.2), we conduct two types of experiments, namely the homogeneous approach and the heterogeneous approach. In the homogeneous approach, we analyse whether automatically designing a decision-tree algorithm for a particular domain provides good results; more specifically, the data sets that feed HEAD-DT during evolution, as well as those employed for performance assessment, share a common application domain. In the heterogeneous approach, we investigate whether HEAD-DT is capable of generating an algorithm that performs well across a variety of different data sets, regardless of their particular characteristics or application domain. We also discuss the theoretical and empirical time complexity of HEAD-DT in Sect. 5.3, and briefly discuss the cost-effectiveness of automated algorithm design in Sect. 5.4. We present examples of algorithms that were automatically designed by HEAD-DT in Sect. 5.5. We conclude the experimental analysis by empirically verifying, in Sect. 5.6, whether the genetic search is worthwhile.
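The distinction between the two meta-training strategies can be summarised as a difference in how data sets are routed to the design (evolution) and assessment stages. The following sketch is purely illustrative and not part of HEAD-DT's implementation; the function names (`specific_framework`, `general_framework`) and the callable parameters `evolve` and `evaluate` are hypothetical stand-ins for the evolutionary design step and the performance-assessment step.

```python
# Illustrative sketch of the two meta-training strategies (hypothetical API).

def specific_framework(dataset, evolve, evaluate):
    # Specific framework: a single data set feeds HEAD-DT both for
    # algorithm design (evolution) and for performance assessment.
    algorithm = evolve(meta_training=[dataset])
    return evaluate(algorithm, meta_test=[dataset])

def general_framework(datasets, evolve, evaluate, n_design):
    # General framework: distinct data sets are used for design and
    # for assessment; the first n_design sets drive the evolution,
    # the remainder are held out to assess the evolved algorithm.
    meta_training = datasets[:n_design]
    meta_test = datasets[n_design:]
    algorithm = evolve(meta_training=meta_training)
    return evaluate(algorithm, meta_test=meta_test)
```

In the homogeneous approach, `datasets` would all share one application domain; in the heterogeneous approach, they would deliberately span different domains.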


Keywords: Experimental analysis · Specific framework · General framework · Cost-effectiveness of automatically-designed algorithms



Copyright information

© The Author(s) 2015

Authors and Affiliations

  • Rodrigo C. Barros (1)
  • André C. P. L. F. de Carvalho (2)
  • Alex A. Freitas (3)

  1. Faculdade de Informática, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre, Brazil
  2. Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, São Carlos, Brazil
  3. School of Computing, University of Kent, Canterbury, UK
