Abstract
The key thing to know about a classifier is how well it will work on future test data. There are two cases to consider: how error on held-out training data predicts test error, and how training error predicts test error. Error on held-out training data is a very good predictor of test error, and Sect. 3.1 explains why this should be true. Our training procedures assume that a classifier that achieves good training error will behave well on test data, so we need some reason to be confident that this is the case. It is possible to bound test error from training error; the bounds are all far too loose to have any practical significance, but their presence is reassuring.
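The claim that held-out error predicts test error well can be illustrated with a standard Hoeffding-style argument: for a single classifier fixed before the held-out data is seen, the held-out error is a mean of independent 0/1 losses, so with probability at least 1 − δ the true test error is within sqrt(log(2/δ)/(2n)) of it. The sketch below is illustrative, not the chapter's own code; the function name, the simulated true error rate, and the sample size are assumptions made for the example.

```python
import math
import random

def hoeffding_bound(heldout_error, n, delta=0.05):
    # Upper confidence bound on test error for ONE classifier chosen
    # before seeing the n held-out examples (Hoeffding's inequality,
    # two-sided form, loss bounded in [0, 1]).
    return heldout_error + math.sqrt(math.log(2.0 / delta) / (2.0 * n))

# Hypothetical setup: a classifier whose true test error is 0.10.
random.seed(0)
true_error = 0.10
n = 1000

# Held-out error is the mean of n independent Bernoulli(true_error)
# losses, so it concentrates tightly around the true test error.
heldout = sum(random.random() < true_error for _ in range(n)) / n
bound = hoeffding_bound(heldout, n)
```

With n = 1000 the confidence interval is only a few percentage points wide, which is why held-out error is such a reliable estimate in practice; the same argument applied to *training* error fails, because the classifier was chosen using that data, and the looser uniform bounds the chapter mentions are needed instead.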
© 2019 Springer Nature Switzerland AG
Cite this chapter
Forsyth, D. (2019). A Little Learning Theory. In: Applied Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-18114-7_3
Print ISBN: 978-3-030-18113-0
Online ISBN: 978-3-030-18114-7