Abstract
Support vector machines (SVMs) and Boosting have been possibly the two most popular learning approaches during the past two decades. It is well known that the margin is a fundamental issue for SVMs, whereas the margin theory for Boosting has recently been defended, establishing a connection between these two mainstream approaches. The recent theoretical results disclosed that the margin distribution, rather than a single margin, is really crucial for the generalization performance, and suggested optimizing the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. Inspired by this recognition, we advocate large margin distribution learning, a promising research direction whose algorithm designs have exhibited superiority over traditional large margin learning.
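The idea of optimizing the margin distribution rather than the minimum margin can be illustrated with a minimal sketch: train a linear classifier by gradient descent on an objective that rewards a large margin mean and penalizes margin variance. This is an illustrative toy, not the authors' LDM formulation; the trade-off parameters `lam_var`, `lam_reg`, and the learning rate are hypothetical choices, and the data are synthetic Gaussian blobs.

```python
# Toy sketch: maximize the margin mean while minimizing the margin variance
# for a linear classifier f(x) = w.x + b on synthetic 2-D data.
# (Illustrative only; not the LDM algorithm of Zhang & Zhou, 2014.)
import numpy as np

rng = np.random.default_rng(0)
# Two Gaussian blobs with labels +1 / -1
X = np.vstack([rng.normal(2.0, 1.0, (50, 2)), rng.normal(-2.0, 1.0, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

w, b = np.zeros(2), 0.0
lam_var, lam_reg, lr = 1.0, 0.1, 0.01  # hypothetical trade-off parameters

for _ in range(500):
    margins = y * (X @ w + b)          # gamma_i = y_i (w.x_i + b)
    mean = margins.mean()
    centered = margins - mean
    # Gradient of -mean(gamma): average of -y_i x_i
    g_mean_w = (y[:, None] * X).mean(axis=0)
    g_mean_b = y.mean()
    # Gradient of var(gamma): 2 * mean((gamma_i - mean) * y_i x_i)
    g_var_w = 2.0 * (centered[:, None] * y[:, None] * X).mean(axis=0)
    g_var_b = 2.0 * (centered * y).mean()
    # Descend on: -mean + lam_var * var + lam_reg * ||w||^2
    w -= lr * (-g_mean_w + lam_var * g_var_w + 2.0 * lam_reg * w)
    b -= lr * (-g_mean_b + lam_var * g_var_b)

acc = ((X @ w + b > 0) == (y > 0)).mean()
```

After training, the margins are pushed to be large on average while staying tightly concentrated; unlike the hard-margin SVM objective, every training point influences the solution, not just the ones attaining the minimum margin.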
References
Aiolli, F., San Martino, G., Sperduti, A.: A kernel method for the optimization of the margin distribution. In: Proceedings of the 18th International Conference on Artificial Neural Networks, Prague, Czech Republic, pp. 305–314 (2008)
Antos, A., Kégl, B., Linder, T., Lugosi, G.: Data-dependent margin-based generalization bounds for classification. Journal of Machine Learning Research 3, 73–98 (2002)
Breiman, L.: Prediction games and arcing classifiers. Neural Computation 11(7), 1493–1517 (1999)
Cortes, C., Vapnik, V.: Support-vector networks. Machine Learning 20(3), 273–297 (1995)
Gao, W., Zhou, Z.-H.: On the doubt about margin explanation of boosting. Artificial Intelligence 199-200, 22–44 (2013) (arXiv:1009.3613, September 2010)
Garg, A., Roth, D.: Margin distribution and learning algorithms. In: Proceedings of the 20th International Conference on Machine Learning, Washington, DC, pp. 210–217 (2003)
Grove, A.J., Schuurmans, D.: Boosting in the limit: Maximizing the margin of learned ensembles. In: Proceedings of the 15th National Conference on Artificial Intelligence, Menlo Park, CA, pp. 692–699 (1998)
Kearns, M., Valiant, L.G.: Cryptographic limitations on learning boolean formulae and finite automata. In: Proceedings of the 21st Annual ACM Symposium on Theory of Computing, Seattle, WA, pp. 433–444 (1989)
Koltchinskii, V., Panchenko, D.: Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics 30(1), 1–50 (2002)
Koltchinskii, V., Panchenko, D.: Complexities of convex combinations and bounding the generalization error in classification. Annals of Statistics 33(4), 1455–1496 (2005)
Pelckmans, K., Suykens, J., De Moor, B.: A risk minimization principle for a class of Parzen estimators. In: Platt, J.C., Koller, D., Singer, Y., Roweis, S. (eds.) Advances in Neural Information Processing Systems 20, pp. 1137–1144. MIT Press, Cambridge (2008)
Reyzin, L., Schapire, R.E.: How boosting the margin can also boost classifier complexity. In: Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, pp. 753–760 (2006)
Schapire, R.E.: The strength of weak learnability. Machine Learning 5(2), 197–227 (1990)
Schapire, R.E., Freund, Y., Bartlett, P.L., Lee, W.S.: Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics 26(5), 1651–1686 (1998)
Shawe-Taylor, J., Williamson, R.C.: Generalization performance of classifiers in terms of observed covering numbers. In: Proceedings of the 4th European Conference on Computational Learning Theory, Nordkirchen, Germany, pp. 153–167 (1999)
Shen, C., Li, H.: Boosting through optimization of margin distributions. IEEE Transactions on Neural Networks 21(4), 659–666 (2010)
Shivaswamy, P.K., Jebara, T.: Variance penalizing AdaBoost. In: Shawe-Taylor, J., Zemel, R.S., Bartlett, P.L., Pereira, F.C.N., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems 24, pp. 1908–1916. MIT Press, Cambridge (2011)
Smola, A.J., Bartlett, P.L., Schölkopf, B., Schuurmans, D. (eds.): Advances in Large Margin Classifiers. MIT Press, Cambridge (2000)
Vapnik, V.: The Nature of Statistical Learning Theory. Springer, New York (1995)
Wang, L., Sugiyama, M., Yang, C., Zhou, Z.-H., Feng, J.: On the margin explanation of boosting algorithms. In: Proceedings of the 21st Annual Conference on Learning Theory, Helsinki, Finland, pp. 479–490 (2008)
Zhang, T., Zhou, Z.-H.: Large margin distribution machine. In: Proceedings of the 20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, New York, NY (2014)
Zhou, Z.-H.: Ensemble Methods: Foundations and Algorithms. CRC Press, Boca Raton (2012)
© 2014 Springer International Publishing Switzerland
Cite this paper
Zhou, ZH. (2014). Large Margin Distribution Learning. In: El Gayar, N., Schwenker, F., Suen, C. (eds) Artificial Neural Networks in Pattern Recognition. ANNPR 2014. Lecture Notes in Computer Science(), vol 8774. Springer, Cham. https://doi.org/10.1007/978-3-319-11656-3_1
Print ISBN: 978-3-319-11655-6
Online ISBN: 978-3-319-11656-3