
SPAARC: A Fast Decision Tree Algorithm

  • Darren Yates
  • Md Zahidul Islam
  • Junbin Gao
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 996)

Abstract

Decision trees are a popular method of data mining and knowledge discovery, capable of extracting hidden information from datasets consisting of both nominal and numerical attributes. However, their need to test the suitability of every attribute at every tree node, in addition to testing every possible split-point for every numerical attribute, can be computationally expensive, particularly for datasets with high dimensionality. This paper proposes SPAARC, a method for speeding up decision tree induction that addresses these issues with two components: sampling of the numeric-attribute split-points at each tree node and dynamic adjustment of the node attribute selection space. Both components can be applied to almost any decision tree algorithm. To confirm its validity, SPAARC has been tested against an implementation of the CART algorithm on 18 freely available datasets from the UCI data repository. Results from this testing indicate that the two components of SPAARC combined have minimal effect on decision tree classification accuracy yet reduce model build times by as much as 69%.
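The two speed-up components named in the abstract are, in general terms, (1) evaluating only a sample of candidate split-points for each numeric attribute at a node, and (2) shrinking the set of attributes tested as the tree grows. The sketch below is a minimal Python illustration of those general ideas only; it is not the authors' SPAARC implementation. The use of Gini impurity (as in CART), the quantile-based choice of roughly 20 candidate thresholds, and the keep_fraction policy for restricting attributes are all assumptions made for illustration.

import numpy as np

def gini(labels):
    # Gini impurity of a 1-D array of class labels.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_sampled_split(values, labels, n_candidates=20):
    # Evaluate only a sample of candidate thresholds instead of every
    # midpoint between consecutive sorted values (assumed scheme: evenly
    # spaced quantiles; the actual SPAARC sampling scheme may differ).
    qs = np.linspace(0.0, 1.0, n_candidates + 2)[1:-1]
    thresholds = np.unique(np.quantile(values, qs))
    n = len(labels)
    parent = gini(labels)
    best_gain, best_threshold = 0.0, None
    for t in thresholds:
        left, right = labels[values <= t], labels[values > t]
        if len(left) == 0 or len(right) == 0:
            continue
        children = (len(left) * gini(left) + len(right) * gini(right)) / n
        gain = parent - children
        if gain > best_gain:
            best_gain, best_threshold = gain, t
    return best_threshold, best_gain

def restrict_attributes(parent_gains, keep_fraction=0.5):
    # Illustrative only: keep the better-scoring half of the attributes
    # evaluated at the parent node when splitting its children. The actual
    # SPAARC policy for adjusting the attribute space is not reproduced here.
    ranked = sorted(parent_gains, key=parent_gains.get, reverse=True)
    return ranked[:max(1, int(len(ranked) * keep_fraction))]

# Example: roughly n_candidates impurity evaluations per numeric attribute,
# independent of how many distinct values the attribute has.
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = (x > 0.3).astype(int)
print(best_sampled_split(x, y))

On this synthetic example the selected threshold should land near 0.3 while scanning only about 20 candidate split-points, which is where the reduction in build time comes from: the cost per numeric attribute no longer grows with the number of distinct values at the node.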

Keywords

Decision tree · Processing speed · Classification accuracy · Node · Attribute · Sampling

Notes

Acknowledgements

This research is supported by an Australian Government Research Training Program (RTP) scholarship.

References

  1. Islam, M.Z., Furner, M., Siers, M.J.: WaterDM: a knowledge discovery and decision support tool for efficient dam management (2016)
  2. Dangare, C.S., Apte, S.S.: Improved study of heart disease prediction system using data mining classification techniques. Int. J. Comput. Appl. 47(10), 44–48 (2012)
  3. Han, J., Pei, J., Kamber, M.: Data Mining: Concepts and Techniques. Elsevier, New York (2011)
  4. Breiman, L., Friedman, J., Stone, C.J., Olshen, R.A.: Classification and Regression Trees. CRC Press, Boca Raton (1984)
  5. Quinlan, J.R.: C4.5: Programs for Machine Learning. Elsevier, New York (2014)
  6. Nath, S.: ACE: exploiting correlation for energy-efficient and continuous context sensing. In: Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services. ACM (2012)
  7. Srinivasan, V., Moghaddam, S., Mukherji, A., Rachuri, K.K., Xu, C., Tapia, E.M.: MobileMiner: mining your frequent patterns on your phone. In: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM (2014)
  8. Hinwood, A., Preston, P., Suaning, G., Lovell, N.: Bank note recognition for the vision impaired. Australas. Phys. Eng. Sci. Med. 29(2), 229 (2006)
  9. Maurer, U., Smailagic, A., Siewiorek, D.P., Deisher, M.: Activity recognition and monitoring using multiple sensors on different body positions. In: International Workshop on Wearable and Implantable Body Sensor Networks (BSN 2006). IEEE (2006)
  10. Darrow, B.: Amazon just made a huge change to its cloud pricing. http://fortune.com/2017/09/18/amazon-cloud-pricing-second/. Accessed 30 June 2018
  11. Fayyad, U.M., Irani, K.B.: On the handling of continuous-valued attributes in decision tree generation. Mach. Learn. 8(1), 87–102 (1992)
  12. Ranka, S., Singh, V.: CLOUDS: a decision tree classifier for large datasets. In: Proceedings of the 4th Knowledge Discovery and Data Mining Conference (1998)
  13. Chandrashekar, G., Sahin, F.: A survey on feature selection methods. Comput. Electr. Eng. 40(1), 16–28 (2014)
  14. Guyon, I., Elisseeff, A.: An introduction to variable and feature selection. J. Mach. Learn. Res. 3(Mar), 1157–1182 (2003)
  15. Dheeru, D., Karra Taniskidou, E.: UCI machine learning repository. https://archive.ics.uci.edu/ml/datasets.html. Accessed 12 Aug 2018
  16. Buntine, W., Niblett, T.: A further comparison of splitting rules for decision-tree induction. Mach. Learn. 8(1), 75–85 (1992)

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Charles Sturt University, Bathurst, Australia
  2. University of Sydney Business School, The University of Sydney, Sydney, Australia
