SPAARC: A Fast Decision Tree Algorithm
Decision trees are a popular method of data mining and knowledge discovery, capable of extracting hidden information from datasets containing both nominal and numerical attributes. However, the need to test the suitability of every attribute at every tree node, and to evaluate every possible split point for every numerical attribute, can be computationally expensive, particularly for datasets with high dimensionality. This paper proposes SPAARC, a method for speeding up the decision tree induction process that consists of two components addressing these issues: sampling the split points of numeric attributes at each tree node, and dynamically adjusting the node attribute selection space. Further, these methods can be applied to almost any decision tree algorithm. To confirm its validity, SPAARC has been tested against an implementation of the CART algorithm on 18 freely available datasets from the UCI data repository. The results indicate that the two components of SPAARC combined have minimal effect on decision tree classification accuracy yet reduce model build times by as much as 69%.
Keywords: Decision tree · Processing speed · Classification accuracy · Node Attribute Sampling
This research is supported by an Australian Government Research Training Program (RTP) scholarship.
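The first component, split-point sampling, can be illustrated with a minimal sketch: instead of evaluating every midpoint between consecutive sorted values of a numeric attribute, only a fixed-size sample of candidate thresholds is scored. This is an illustrative assumption of how such sampling might work, not the paper's actual implementation; the function names, the sample size, and the use of CART's Gini criterion are all assumptions.

```python
import random

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split_sampled(values, labels, n_candidates=10, seed=0):
    """Choose a split threshold for one numeric attribute, scoring only a
    random sample of candidate midpoints rather than all of them.
    Returns (threshold, weighted_gini) of the best sampled candidate."""
    pairs = sorted(zip(values, labels))
    xs = [v for v, _ in pairs]
    ys = [y for _, y in pairs]
    n = len(xs)
    # All midpoints between distinct consecutive values (exhaustive CART
    # would score every one of these).
    midpoints = [(xs[i] + xs[i + 1]) / 2
                 for i in range(n - 1) if xs[i] != xs[i + 1]]
    rng = random.Random(seed)
    if len(midpoints) > n_candidates:
        # The sampling step: cap the number of thresholds actually scored.
        midpoints = rng.sample(midpoints, n_candidates)
    best_t, best_score = None, float("inf")
    for t in midpoints:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best_score:
            best_t, best_score = t, score
    return best_t, best_score
```

The cost of scoring one attribute at one node drops from O(number of distinct values) candidate evaluations to a constant number, at the risk of missing the exact optimal threshold, which is why the paper measures the accuracy impact empirically.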