Abstract
Discretization techniques have played an important role in machine learning and data mining, as most methods in these areas require that the training data set contain only discrete attributes. Data discretization unification (DDU), one of the state-of-the-art discretization techniques, trades off classification errors against the number of discretized intervals, and unifies existing discretization criteria. However, it suffers from two deficiencies. First, DDU is inefficient: it searches over a large number of parameter settings to find good results, yet still does not guarantee an optimal solution. Second, DDU does not take into account the number of inconsistent records produced by discretization, which leads to unnecessary information loss. To overcome these deficiencies, this paper presents a Universal Discretization technique, namely UniDis. We first develop a non-parametric normalized discretization criterion which avoids the effect of the relatively large difference between classification errors and the number of discretized intervals on discretization results. In addition, we define a new entropy-based measure of inconsistency for multi-dimensional variables to effectively control information loss while producing a concise summarization of continuous variables. Finally, we propose a heuristic algorithm, based on the non-parametric normalized criterion and the entropy-based inconsistency measure, to guarantee better discretization. Besides theoretical analysis, experimental results with the J4.8 decision tree and the Naive Bayes classifier demonstrate that our approach is statistically comparable to DDU under a popular statistical test, and that it yields a discretization scheme which improves classification accuracy significantly over the other previously known discretization methods.
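The entropy-based inconsistency idea mentioned in the abstract can be illustrated with a minimal sketch: records that share the same discretized attribute values but carry different class labels are "inconsistent", and the class-label entropy within each such group, weighted by group size, quantifies the information lost. The paper's exact measure for multi-dimensional variables is defined in the full text; the function below, including its name and signature, is only an illustrative assumption.

```python
from collections import Counter, defaultdict
from math import log2

def inconsistency_entropy(records, labels):
    """Weighted class-label entropy over groups of records that share the
    same discretized attribute tuple. Zero means no inconsistent records;
    larger values mean more information loss. (Illustrative sketch only,
    not the paper's exact measure.)"""
    groups = defaultdict(list)
    for rec, lab in zip(records, labels):
        groups[tuple(rec)].append(lab)
    n = len(labels)
    total = 0.0
    for labs in groups.values():
        counts = Counter(labs)
        m = len(labs)
        h = -sum((c / m) * log2(c / m) for c in counts.values())
        total += (m / n) * h
    return total

# The first two records share the discretized tuple (1, 0) but disagree
# on the class label, so they contribute 1 bit of entropy at weight 2/3:
recs = [(1, 0), (1, 0), (2, 1)]
labs = ["a", "b", "a"]
print(inconsistency_entropy(recs, labs))  # ≈ 0.667
```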
References
Biba, M., Esposito, F., Ferilli, S., Mauro, N.D., Basile, T. (2007). Unsupervised discretization using kernel density estimation. In: Proceedings of Twentieth International Joint Conference on Artificial Intelligence (IJCAI) (pp. 696–701).
Bondu, A., Boulle, M., Lemaire, V., Loiseau, S., Duval, B. (2008). A Non-parametric semi-supervised discretization method. In: Proceedings of Eighth IEEE International Conference on Data Mining (ICDM) (pp. 53–62).
Boulle, M. (2004). Khiops: a statistical discretization method of continuous attributes. Machine Learning, 55, 53–69.
Boulle, M. (2006). MODL: a bayes optimal discretization method for continuous attributes. Machine Learning, 65, 131–165.
Ching, J.Y., Wong, A.K.C., Chan, K.C.C. (1995). Class-dependent discretization for inductive learning from continuous and mixed-mode data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(7), 641–651.
Cios, K.J., & Kurgan, L.A. (2007). CLIP4: hybrid inductive machine learning algorithm that generates inequality rules. Information Sciences, 177(17), 3592–3612.
Cover, T.M., & Thomas, J.A. (2006). Elements of information theory (2nd ed.). New York: Wiley.
Demsar, J. (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7, 1–30.
Dougherty, J., Kohavi, R., Sahami, M. (1995). Supervised and unsupervised discretization of continuous features. In: Proceedings of 12th International Conference on Machine Learning (pp. 194–202).
Fayyad, U., & Irani, K. (1993). Multi-interval discretization of continuous-valued attributes for classification learning. In: Proceedings of thirteenth international joint conference on artificial intelligence (pp. 1022–1027). San Mateo, CA: Morgan Kaufmann.
Hand, D., Mannila, H., Smyth, P. (2001). Principles of data mining. MIT Press.
Hansen, M.H., & Yu, B. (2001). Model selection and the principle of minimum description length. Journal of the American Statistical Association, 96(545), 746–774.
Hettich, S., & Bay, S.D. (1999). The UCI KDD Archive [DB/OL]. http://kdd.ics.uci.edu/. Accessed 12 Aug 2010.
Jin, R.M., Breitbart, Y., Muoh, C. (2007). Data discretization unification. In: Proceedings of seventh IEEE International Conference on Data Mining (ICDM Best Paper) (pp. 183–192).
Jin, Y.W., & Qu, W.Y. (2009). Multi-dimension multi-objective fuzzy optimum dynamic programming method with complicated information based on a maximal-sum-rule of decision sequence priority. In: Eighth IEEE international conference on embedded computing; IEEE international conference on scalable computing and communications (pp. 656–660). Dalian, China.
Kerber, R. (1992). ChiMerge: discretization of numeric attributes. In: Proceedings of ninth national conference on artificial intelligence (pp. 123–128). AAAI Press.
Kurgan, L.A., & Cios, K.J. (2004). CAIM discretization algorithm. IEEE Transactions on Knowledge and Data Engineering, 16(2), 145–153.
Ling, C.X., & Zhang, H.J. (2002). The representational power of discrete bayesian networks. Journal of Machine Learning Research, 3, 709–721.
Liu, L.L., Wong, A.K.C., Wang, Y. (2004). A global optimal algorithm for class-dependent discretization of continuous data. Intelligent Data Analysis, 8(2), 151–170.
Liu, H., Hussain, F., Tan, C.L., Dash, M. (2002). Discretization: an enabling technique. Journal of Data Mining and Knowledge Discovery, 6(4), 393–423.
Liu, H., & Setiono, R. (1997). Feature selection via discretization. IEEE Transactions on Knowledge and Data Engineering, 9(4), 642–645.
Mahady, H., Muhammad, A.C., Qu, W.Y., Lin, X.M. (2010). Efficient algorithms to monitor continuous constrained k nearest neighbor queries. In: Database systems for advanced applications (pp. 233–249). Tsukuba, Japan.
Mussard, S., Seyte, F., Terraza, M. (2003). Decomposition of Gini and the generalized entropy inequality measures. Economic Bulletin, 4(7), 1–6.
Pawlak, Z. (1982). Rough sets. International Journal of Computer and Information Sciences, 11(5), 341–356.
Quinlan, J.R. (1986). Induction of decision trees. Machine Learning, 1, 81–106.
Quinlan, J.R. (1993). C4.5: Programs for machine learning. San Mateo, California: Morgan Kaufmann.
Roweis, S.T., & Saul, L.K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2323–2326.
Schmidberger, G., & Frank, E. (2005). Unsupervised discretization using tree-based density estimation. In: Proceedings of The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) (pp. 240–251).
Su, C.T., & Hsu, J.H. (2005). An extended Chi2 algorithm for discretization of real value attributes. IEEE Transactions on Knowledge and Data Engineering, 17(3), 437–441.
Tay, E.H., & Shen, L. (2002). A modified Chi2 algorithm for discretization. IEEE Transactions on Knowledge and Data Engineering, 14(3), 666–670.
Tsai, C.J., Lee, C.I., Yang, W.P. (2008). A discretization algorithm based on class-attribute contingency coefficient. Information Sciences, 178, 714–731.
Wang, H.X., & Zaniolo, C. (2000). CMP: a fast decision tree classifier using multivariate predictions. In: 16th International Conference on Data Engineering (ICDE00) (pp. 449–460).
Weka 3 Data mining software in Java (2007). http://www.cs.waikato.ac.nz/ml/weka. Accessed 26 Nov 2010.
Witten, I.H., & Frank, E. (2000). Data mining: Practical machine learning tools and techniques with java implementations. San Francisco, CA: Morgan Kaufmann.
Zar, J.H. (1998). Biostatistical analysis (4th ed.). Englewood Cliffs, New Jersey: Prentice Hall.
Ziarko, W. (1993). Variable precision rough set model. Journal of Computer and System Science, 46, 39–59.
Acknowledgements
This work is supported by NSFC under Grant nos. 60973115, 60973117, 61173160, 61173162 and 61173165, and by the Program for New Century Excellent Talents in University (NCET) of the Ministry of Education of China.
Appendix
In this section, we show the monotonicity of \(f(\beta)\) and \(H_{\beta}(R_i)\) with regard to \(\beta\).
Theorem 1
\(f(\beta)\) and \(H_{\beta}(R_i)\) are monotonically decreasing functions of \(\beta\) on the interval (0,1].
Proof
Let \(\beta_1\) and \(\beta_2\) be two values in the interval (0,1] with \(\beta_1 < \beta_2\). For \(f(\beta)\) in (1), we have
\(\because~ 0<\beta_1< \beta_2\leq 1\)

\(\therefore~ \beta_2-\beta_1>0, \quad \beta_1\beta_2>0, \quad \beta_1\big(\frac{1}{N}\big)^{\beta_2}-\beta_2\big(\frac{1}{N}\big)^{\beta_1}<0\)

\(\because~ N \gg 1\)

\(\therefore~ \beta_2-\beta_1 > \big|\, \beta_1\big(\frac{1}{N}\big)^{\beta_2}-\beta_2\big(\frac{1}{N}\big)^{\beta_1} \,\big|\)

\(\therefore~ f(\beta_1)>f(\beta_2)\)
Therefore, \(f(\beta)\) increases as \(\beta\) decreases; that is, \(f(\beta)\) is monotonically decreasing in \(\beta\) on (0,1]. Similarly, \(H_{\beta_1}(R_{i})>H_{\beta_2}(R_{i})\), so \(H_{\beta}(R_i)\) is also monotonically decreasing with regard to \(\beta\) on (0,1].□
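The monotonicity claim of Theorem 1 can be spot-checked numerically. The exact definition of f in Eq. (1) is given in the full text; the form below, f(β) = (1 − (1/N)^β)/β, is an assumption chosen only because it matches the endpoint values stated in Theorem 2 (f(1) = 1 − 1/N and f(β) → ln N as β → 0):

```python
# Assumed form of f, consistent with Theorem 2's endpoints; the paper's
# Eq. (1) is the authoritative definition.
def f(beta, N):
    return (1.0 - (1.0 / N) ** beta) / beta

N = 1000
betas = [0.01, 0.1, 0.3, 0.5, 0.7, 1.0]
vals = [f(b, N) for b in betas]
# Values strictly decrease as beta increases on (0, 1]:
assert all(vals[i] > vals[i + 1] for i in range(len(vals) - 1))
print(vals)
```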
Theorem 2
\(f(\beta)\in \big[1-\frac{1}{N}, \ln N \big]\), and \(H_1(R_i) \leq H_{\beta}(R_i) \leq \log S\).
Proof
According to Theorem 1, \(f(\beta)\) achieves its minimum value when \(\beta = 1\) and approaches its maximum value as \(\beta \to 0\). Then, we have

\(f(1)=1-\frac{1}{N}\leq f(\beta)\leq \lim_{\beta\to 0^{+}}f(\beta)=\ln N\)
Similarly,

\(H_1(R_i)\leq H_{\beta}(R_i)\leq \lim_{\beta\to 0^{+}}H_{\beta}(R_i)=H(R_i)\)
where H(R i ) is Shannon’s entropy (Cover and Thomas 2006) of interval R i . According to the extremum property of entropy, H(R i ) ≤ logS. Therefore, the theorem is proven.□
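The final step of the proof relies on the extremum property of Shannon's entropy: over S classes, H(R_i) is maximized by the uniform distribution, where it equals log S. A quick numerical check of that property (using base-2 logarithms, one conventional choice for the unspecified base of log S):

```python
from math import log2
import random

def shannon_entropy(p):
    """Shannon entropy (in bits) of a probability distribution p."""
    return -sum(x * log2(x) for x in p if x > 0)

random.seed(0)
S = 5  # number of classes
# Entropy of any distribution over S classes never exceeds log2(S)...
for _ in range(1000):
    w = [random.random() for _ in range(S)]
    total = sum(w)
    p = [x / total for x in w]
    assert shannon_entropy(p) <= log2(S) + 1e-12
# ...and the uniform distribution attains the maximum exactly.
assert abs(shannon_entropy([1 / S] * S) - log2(S)) < 1e-12
```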
Cite this article
Sang, Y., Jin, Y., Li, K. et al. UniDis: a universal discretization technique. J Intell Inf Syst 40, 327–348 (2013). https://doi.org/10.1007/s10844-012-0228-1