Classification of Heterogeneous Data Based on Data Type Impact on Similarity

  • Najat Ali
  • Daniel Neagu
  • Paul Trundle
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 840)

Abstract

Real-world datasets are increasingly heterogeneous, mixing numerical, categorical and other feature types. The main challenge in mining heterogeneous datasets is how to deal with the heterogeneity present in the data records. Although some existing classifiers (such as decision trees) can handle heterogeneous data in specific circumstances, the performance of such models may still be improved, because heterogeneity calls for specific adjustments to similarity measurements and calculations. Moreover, heterogeneous data is still treated inconsistently and in an ad-hoc manner. In this paper, we study the problem of heterogeneous data classification: our purpose is to turn heterogeneity into a positive feature of the classification effort by using the similarity between data objects consistently. We address the heterogeneity issue by studying the impact of mixing data types on the calculation of data objects’ similarity. To reach this goal, we propose an algorithm that divides the initial data records, based on pairwise similarity, into classification subtasks, with the aim of increasing the quality of the data subsets and applying specialized classifier models to them. The performance of the proposed approach is evaluated on 10 publicly available heterogeneous datasets. The results show that the models achieve better performance for heterogeneous datasets when the proposed similarity process is used.
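
The abstract describes splitting records by pairwise similarity before classification but includes no code, so the sketch below is only illustrative. It computes separate numerical and categorical similarity components for mixed-type records, in the spirit of Gower's general similarity coefficient (range-normalised differences for numerical features, simple matching for categorical ones), and then assigns each record to a subset according to which component dominates. The splitting rule, the function names and the toy data are assumptions of this sketch, not the authors' algorithm.

```python
import numpy as np


def pairwise_mixed_similarity(num, cat):
    """Per-pair numerical and categorical similarity components.

    num : (n, p) float array of numerical features
    cat : (n, q) int array of categorical codes
    """
    n = num.shape[0]
    # Range-normalisation for the numerical part (Gower-style).
    ranges = num.max(axis=0) - num.min(axis=0)
    ranges[ranges == 0] = 1.0  # avoid division by zero for constant features

    sim_num = np.zeros((n, n))
    sim_cat = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Numerical component: 1 - mean range-normalised absolute difference.
            sim_num[i, j] = np.mean(1.0 - np.abs(num[i] - num[j]) / ranges)
            # Categorical component: simple matching coefficient.
            sim_cat[i, j] = np.mean(cat[i] == cat[j])
    return sim_num, sim_cat


def split_by_dominant_component(sim_num, sim_cat):
    """Hypothetical splitting rule: send each record to the subset whose
    average similarity component (numerical vs. categorical) is larger."""
    return np.where(sim_num.mean(axis=1) >= sim_cat.mean(axis=1), 0, 1)


# Toy heterogeneous data: 6 records, 2 numerical and 2 categorical features.
num = np.array([[1.0, 5.0], [1.1, 4.8], [9.0, 0.5],
                [8.7, 0.3], [5.0, 2.5], [5.2, 2.7]])
cat = np.array([[0, 1], [0, 1], [2, 0], [2, 0], [1, 1], [2, 1]])
y = np.array([0, 0, 1, 1, 0, 1])

sim_num, sim_cat = pairwise_mixed_similarity(num, cat)
groups = split_by_dominant_component(sim_num, sim_cat)
for g in np.unique(groups):
    idx = np.where(groups == g)[0]
    # A specialised classifier (e.g. a decision tree for one subset and
    # k-NN for the other) would be trained here on num[idx], cat[idx], y[idx].
    print(f"subset {g}: records {idx.tolist()}, labels {y[idx].tolist()}")
```

The two per-record mean components can also be read as coordinates in a two-dimensional similarity space, which is one possible reading of the "two-dimensional similarity space" keyword; the paper's actual construction may differ.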

Keywords

Heterogeneous datasets · Similarity measures · Two-dimensional similarity space · Classification algorithms

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Artificial Intelligence Research (AIRe) Group, Faculty of Engineering and Informatics, University of Bradford, Bradford, UK
