Abstract
As a major type of categorical data, ordinal data consist of attributes whose possible values (interchangeably called categories) are naturally ordered. To the best of our knowledge, none of the existing distance metrics proposed for categorical data takes this underlying order information into account, which makes the produced distances inaccurate and in turn degrades the results of ordinal data clustering. We therefore propose a specially designed distance metric that exploits the order information embedded in the ordered categories. It quantifies the distance between two ordinal categories by accumulating the sub-entropies of all the categories ordered between them. Because the proposed metric accounts for the order information, the distances it produces are more reasonable than those of the other metrics proposed for categorical data. Moreover, it is parameter-free and can easily be applied to different ordinal data clustering tasks. Experimental results demonstrate the promising advantages of the proposed distance metric.
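Taking the abstract's description at face value, the core idea can be sketched as follows. Note that interpreting "sub-entropy" as the per-category term -p_k log p_k, and including both endpoint categories in the accumulated interval, are our own assumptions for illustration; the paper's exact formulation may differ.

```python
import math
from collections import Counter

def ordinal_entropy_distance(values, a, b):
    """Distance between two ordered categories a and b of one ordinal
    attribute, read as the sum of the sub-entropies -p_k * log2(p_k)
    of every category lying in the interval between them.

    values : observed category labels (integers with a natural order)
    a, b   : the two categories to compare

    Assumed interpretation, not the paper's exact formulation.
    """
    if a == b:
        return 0.0
    counts = Counter(values)
    n = len(values)
    lo, hi = min(a, b), max(a, b)
    # Accumulate the sub-entropy of each category in the closed
    # interval [lo, hi]; categories farther apart in the order
    # accumulate more terms, so distance grows with the order gap.
    d = 0.0
    for k in range(lo, hi + 1):
        p = counts.get(k, 0) / n
        if p > 0:
            d -= p * math.log2(p)
    return d
```

With a uniform sample over three categories, the sketch behaves as the abstract suggests: the distance is symmetric, zero between identical categories, and larger for categories farther apart in the order.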
This work was fully supported by Faculty Research Grant of Hong Kong Baptist University under Project FRG2/17-18/082.
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Zhang, Y., Cheung, Ym. (2018). Exploiting Order Information Embedded in Ordered Categories for Ordinal Data Clustering. In: Ceci, M., Japkowicz, N., Liu, J., Papadopoulos, G., Raś, Z. (eds) Foundations of Intelligent Systems. ISMIS 2018. Lecture Notes in Computer Science(), vol 11177. Springer, Cham. https://doi.org/10.1007/978-3-030-01851-1_24
Print ISBN: 978-3-030-01850-4
Online ISBN: 978-3-030-01851-1