
Comparisons of ADABOOST, KNN, SVM and Logistic Regression in Classification of Imbalanced Dataset

  • Conference paper

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 545))

Abstract

Data mining classification techniques are affected by the presence of imbalance between the classes of a response variable. The difficulty of handling imbalanced data has led to an influx of methods that address the imbalance at either the data level or the algorithmic level. The R programming language is one of many tools available for data mining. This paper compares several classification algorithms in R on an imbalanced medical data set. The classifiers ADABOOST, KNN, SVM-RBF and logistic regression were applied to the original, random oversampling and random undersampling data sets. Results show that ADABOOST, KNN and SVM-RBF exhibit overfitting when applied to the original data set. No overfitting occurs for the random oversampling data set, where SVM-RBF has the highest accuracy (Training: 91.5%, Testing: 90.6%), sensitivity (Training: 91.0%, Testing: 91.0%), specificity (Training: 92.0%, Testing: 90.2%) and precision (Training: 91.9%, Testing: 90.5%) on both the training and testing data sets. For random undersampling, only ADABOOST and logistic regression show no overfitting. Logistic regression is the most stable classifier, exhibiting consistent training and testing results.
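The abstract describes the workflow only at a high level, so the following is a minimal R sketch of such a pipeline: randomly oversampling or undersampling an imbalanced binary response, fitting logistic regression, an RBF-kernel SVM and KNN, and computing the accuracy, sensitivity, specificity and precision reported above. The data frame dat, the helpers oversample(), undersample() and metrics(), and the choice of the e1071 and class packages are illustrative assumptions, not the authors' original code or data; ADABOOST could be fitted analogously (e.g. via the adabag package).

```r
## A minimal sketch, NOT the authors' code: dat, oversample(), undersample()
## and metrics() are hypothetical; packages e1071 and class are assumptions.
library(e1071)   # svm() with an RBF ("radial") kernel
library(class)   # knn()

set.seed(1)

## Hypothetical imbalanced binary data (heavily skewed toward "neg")
n   <- 1000
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$y <- factor(ifelse(dat$x1 + dat$x2 + rnorm(n) > 1.5, "pos", "neg"))

## Random oversampling: duplicate minority-class rows until classes are balanced
oversample <- function(d, response) {
  tab      <- table(d[[response]])
  minority <- d[d[[response]] == names(which.min(tab)), ]
  rbind(d, minority[sample(nrow(minority), max(tab) - min(tab), replace = TRUE), ])
}

## Random undersampling: drop majority-class rows until classes are balanced
undersample <- function(d, response) {
  tab      <- table(d[[response]])
  majority <- d[d[[response]] == names(which.max(tab)), ]
  rbind(d[d[[response]] != names(which.max(tab)), ],
        majority[sample(nrow(majority), min(tab)), ])
}

idx   <- sample(n, 700)
train <- oversample(dat[idx, ], "y")   # swap in undersample() for the other scheme
test  <- dat[-idx, ]

## Classifiers (ADABOOST could be added analogously, e.g. adabag::boosting)
fit_lr  <- glm(y ~ x1 + x2, data = train, family = binomial)
fit_svm <- svm(y ~ x1 + x2, data = train, kernel = "radial")

pred_lr  <- factor(ifelse(predict(fit_lr, test, type = "response") > 0.5,
                          "pos", "neg"), levels = levels(test$y))
pred_svm <- predict(fit_svm, test)
pred_knn <- knn(train[, c("x1", "x2")], test[, c("x1", "x2")], cl = train$y, k = 5)

## Confusion-matrix measures used in the paper
metrics <- function(pred, obs, positive = "pos") {
  tp <- sum(pred == positive & obs == positive)
  tn <- sum(pred != positive & obs != positive)
  fp <- sum(pred == positive & obs != positive)
  fn <- sum(pred != positive & obs == positive)
  c(accuracy    = (tp + tn) / length(obs),
    sensitivity = tp / (tp + fn),
    specificity = tn / (tn + fp),
    precision   = tp / (tp + fp))
}

metrics(pred_lr,  test$y)
metrics(pred_svm, test$y)
metrics(pred_knn, test$y)
```

In the paper's setting, the same measures would also be computed on the training set, so that a large gap between training and testing performance can be read as overfitting.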




Author information


Correspondence to Hezlin Aryani Abd Rahman.



Copyright information

© 2015 Springer Science+Business Media Singapore

About this paper

Cite this paper

Rahman, H.A.A., Wah, Y.B., He, H., Bulgiba, A. (2015). Comparisons of ADABOOST, KNN, SVM and Logistic Regression in Classification of Imbalanced Dataset. In: Berry, M., Mohamed, A., Yap, B. (eds) Soft Computing in Data Science. SCDS 2015. Communications in Computer and Information Science, vol 545. Springer, Singapore. https://doi.org/10.1007/978-981-287-936-3_6


  • DOI: https://doi.org/10.1007/978-981-287-936-3_6


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-287-935-6

  • Online ISBN: 978-981-287-936-3

  • eBook Packages: Computer Science, Computer Science (R0)
