Learning Rules from Distributed Data
This paper raises a concern about the accuracy, as a function of the degree of parallelism, of a certain class of distributed learning algorithms, and illustrates one proposed improvement. We focus on learning a single model from a set of disjoint data sets distributed across a set of computers; the model is a set of rules. The data sets may be disjoint for any of several reasons. In our approach, a rule set (model) is first constructed for each of the original disjoint data sets. Rule sets are then merged until a final rule set is obtained that models the aggregate data. We show that this approach compares favorably with creating a rule set directly from the aggregate data and promises faster learning. Accuracy can drop off as the degree of parallelism increases; however, we have developed an approach that extends the degree of parallelism achieved before this problem takes over.
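The partition-then-merge scheme described above can be illustrated with a minimal sketch. This is not the paper's algorithm: the toy `learn_rules` stand-in (a real system would use a rule inducer such as C4.5rules or RIPPER) and the tie-breaking policy in `merge_rule_sets` are illustrative assumptions.

```python
from collections import Counter

def learn_rules(partition):
    """Toy rule learner: one rule per distinct attribute pattern,
    predicting that pattern's majority class. A stand-in for a real
    rule inducer run locally on one disjoint data set."""
    votes = {}
    for attrs, label in partition:
        votes.setdefault(frozenset(attrs.items()), Counter())[label] += 1
    return {conds: cnt.most_common(1)[0][0] for conds, cnt in votes.items()}

def merge_rule_sets(rs_a, rs_b):
    """Merge two rule sets; on a conflict (same conditions, different
    class) keep the rule from rs_a -- a hypothetical resolution policy."""
    merged = dict(rs_b)
    merged.update(rs_a)  # rules from rs_a win ties
    return merged

def distributed_learn(partitions):
    """Learn one rule set per disjoint partition, then merge rule sets
    pairwise until a single rule set models the aggregate data."""
    rule_sets = [learn_rules(p) for p in partitions]
    while len(rule_sets) > 1:
        paired = [merge_rule_sets(rule_sets[i], rule_sets[i + 1])
                  for i in range(0, len(rule_sets) - 1, 2)]
        if len(rule_sets) % 2:          # carry an unpaired set forward
            paired.append(rule_sets[-1])
        rule_sets = paired
    return rule_sets[0]

# Usage: two disjoint partitions of a tiny data set.
partitions = [
    [({"color": "red"}, "stop"), ({"color": "green"}, "go")],
    [({"color": "green"}, "go"), ({"color": "yellow"}, "wait")],
]
final_rules = distributed_learn(partitions)
```

The pairwise merge tree is what gives the approach its parallelism: each level of merges can run concurrently, which is also where the accuracy-versus-parallelism trade-off discussed above arises.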