Support Vector Machines

  • Ke-Lin Du
  • M. N. S. Swamy
Chapter

Abstract

SVM is one of the most popular nonparametric classification algorithms. It is grounded in computational learning theory and yields the maximum-margin separating hyperplane, which is optimal in the sense of structural risk minimization. This chapter is dedicated to SVM. We first introduce the SVM model, and then describe in detail training methods for SVM-based classification, clustering, and regression. Associated topics, such as optimization of the model architecture, are also described.
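
For readers who want to experiment alongside the chapter, the sketch below illustrates the basic classification and regression workflow summarized above. It is not taken from the chapter: it assumes scikit-learn's SVC and SVR estimators and uses synthetic toy data, serving only as a minimal example of soft-margin SVM classification and epsilon-insensitive SVM regression.

```python
# A minimal sketch (not from the chapter): soft-margin SVM classification and
# epsilon-insensitive SVM regression, assuming scikit-learn's SVC/SVR estimators.
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian blobs in the plane.
X_cls = np.vstack([rng.normal(-1.0, 1.0, size=(50, 2)),
                   rng.normal(+1.0, 1.0, size=(50, 2))])
y_cls = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # soft-margin SVM with RBF kernel
clf.fit(X_cls, y_cls)
print("number of support vectors:", len(clf.support_vectors_))

# Toy regression data: noisy sine wave, fitted with epsilon-insensitive SVR.
X_reg = np.linspace(0.0, 2.0 * np.pi, 100).reshape(-1, 1)
y_reg = np.sin(X_reg).ravel() + 0.1 * rng.standard_normal(100)

reg = SVR(kernel="rbf", C=1.0, epsilon=0.1)
reg.fit(X_reg, y_reg)
print("training R^2 of the SVR fit:", round(reg.score(X_reg, y_reg), 3))
```

In this sketch, C controls the soft-margin trade-off between margin width and training error, while epsilon sets the width of the insensitive tube in regression; choosing such hyperparameters is part of the model-selection material the chapter covers.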


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Department of Electrical and Computer Engineering, Concordia University, Montreal, Canada
  2. Xonlink Inc., Hangzhou, China
