Abstract
In this chapter, we discuss the support vector machine (SVM), an approach for classification that was developed in the computer science community in the 1990s and that has grown in popularity since then. SVMs have been shown to perform well in a variety of settings, and are often considered one of the best “out of the box” classifiers.
Notes
- 1. The word affine indicates that the subspace need not pass through the origin.
- 2. By expanding each of the inner products in (9.19), it is easy to see that f(x) is a linear function of the coordinates of x. Doing so also establishes the correspondence between the \(\alpha_i\) and the original parameters \(\beta_j\).
- 3. With this hinge-loss plus penalty representation, the margin corresponds to the value one, and the width of the margin is determined by \(\sum_j \beta_j^2\).
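The hinge-loss plus penalty representation in note 3 can be sketched numerically. This is a minimal illustration, not the chapter's own code; the function name and toy data are invented for the example:

```python
import numpy as np

def svm_objective(beta0, beta, X, y, lam):
    """SVM objective in 'loss + penalty' form:
    sum_i max(0, 1 - y_i * f(x_i)) + lam * sum_j beta_j^2,
    where f(x_i) = beta0 + x_i . beta. Observations with
    y_i * f(x_i) >= 1 lie on the correct side of the margin
    and contribute zero hinge loss."""
    f = beta0 + X @ beta                   # decision values f(x_i)
    hinge = np.maximum(0.0, 1.0 - y * f)   # hinge loss per observation
    return hinge.sum() + lam * np.sum(beta ** 2)

# Tiny example: two well-separated points in one dimension.
X = np.array([[2.0], [-2.0]])
y = np.array([1.0, -1.0])
print(svm_objective(0.0, np.array([1.0]), X, y, lam=0.1))
# both points satisfy y_i * f(x_i) >= 1, so only the penalty remains: 0.1
```

Shrinking the penalty term \(\sum_j \beta_j^2\) widens the margin, which is why the value one marks the margin boundary in this parameterization.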
Copyright information
© 2013 Springer Science+Business Media New York
About this chapter
Cite this chapter
James, G., Witten, D., Hastie, T., Tibshirani, R. (2013). Support Vector Machines. In: An Introduction to Statistical Learning. Springer Texts in Statistics, vol 103. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-7138-7_9
DOI: https://doi.org/10.1007/978-1-4614-7138-7_9
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4614-7137-0
Online ISBN: 978-1-4614-7138-7
eBook Packages: Mathematics and Statistics (R0)