
Kernel classification using a linear programming approach

  • Alexander M. Malyscheff
  • Theodore B. Trafalis

Abstract

A support vector machine (SVM) classifier corresponds in its most basic form to a quadratic programming problem. Several linear variants of support vector classification have been investigated, such as minimizing the L1-norm of the weight vector instead of the L2-norm. In this paper we introduce a classifier that minimizes the boundary (lower envelope) of the epigraph generated over a set of functions, which can be interpreted as a measure of distance or slack from the origin. The resulting classifier appears to provide generalization performance similar to that of SVMs while exhibiting a more advantageous computational complexity. The formulation can also be extended to handle imbalanced data.
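As background for the L1-norm variant mentioned above, the following sketch shows how such a classifier reduces to a linear program by splitting the weight vector into nonnegative parts. This is a minimal illustration, not the epigraph-based classifier proposed in this paper; the penalty parameter C, the helper name l1_svm_fit, and the toy data are illustrative assumptions, and SciPy's linprog solver is assumed to be available.

    # Minimal sketch of the L1-norm soft-margin linear classifier as a linear program
    # (illustrative only; not the classifier introduced in this paper).
    import numpy as np
    from scipy.optimize import linprog

    def l1_svm_fit(X, y, C=1.0):
        """Solve  min ||w||_1 + C*sum(xi)  s.t.  y_i (w.x_i + b) >= 1 - xi_i,  xi >= 0,
        with labels y_i in {-1, +1}, by splitting w = u - v, u, v >= 0."""
        n, d = X.shape
        # Decision variables stacked as [u (d), v (d), b (1), xi (n)]
        c = np.concatenate([np.ones(d), np.ones(d), [0.0], C * np.ones(n)])
        # Margin constraints rewritten as  -y_i*(u - v).x_i - y_i*b - xi_i <= -1
        Yx = y[:, None] * X
        A_ub = np.hstack([-Yx, Yx, -y[:, None], -np.eye(n)])
        b_ub = -np.ones(n)
        bounds = [(0, None)] * d + [(0, None)] * d + [(None, None)] + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        u, v, b = res.x[:d], res.x[d:2 * d], res.x[2 * d]
        return u - v, b

    if __name__ == "__main__":
        # Two Gaussian clouds as toy data (illustrative assumption)
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(+1, 0.5, (20, 2))])
        y = np.array([-1] * 20 + [+1] * 20)
        w, b = l1_svm_fit(X, y)
        print("weights:", w, "bias:", b)
        print("training accuracy:", np.mean(np.sign(X @ w + b) == y))

The split w = u - v with u, v >= 0 keeps both the objective and the constraints linear; the same reformulation underlies most LP-based variants of support vector classification.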

Keywords

Kernel methods · Classification · Linear programming

Mathematics Subject Classification (2010)

68T05 · 90C05


Acknowledgements

Theodore Trafalis's work was conducted at the National Research University Higher School of Economics and was supported by RSF grant no. 14-41-00039.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Electrical and Computer Engineering, University of Oklahoma, Norman, USA
  2. School of Industrial and Systems Engineering, University of Oklahoma, Norman, USA
