A Little Learning Theory

  • David Forsyth


The key thing to know about a classifier is how well it will work on future test data. There are two questions to ask: how well error on held-out training data predicts test error, and how well training error predicts test error. Error on held-out data is a very good predictor of test error; it is worth knowing why this should be true, and Sect. 3.1 deals with that. Our training procedures assume that a classifier that achieves good training error will behave well on test data, so we need some reason to be confident that this is the case. It is possible to bound test error using training error. The bounds are far too loose to have any practical significance, but their presence is reassuring.
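The standard reason held-out error predicts test error is a concentration argument: held-out examples are an independent sample, so the held-out error rate concentrates around the true test error. A minimal sketch of the resulting confidence half-width, using Hoeffding's inequality (this is the usual textbook argument, not necessarily the exact derivation of Sect. 3.1; the function name is illustrative):

```python
import math

def holdout_error_bound(n_holdout: int, delta: float = 0.05) -> float:
    """Hoeffding half-width: with probability at least 1 - delta, the
    true test error lies within +/- this bound of the held-out error,
    given n_holdout independent held-out examples."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n_holdout))

# With 2000 held-out examples and 95% confidence, the half-width is
# about 0.03: held-out error pins down test error to within roughly 3%.
eps = holdout_error_bound(2000)
```

Note the bound shrinks like 1/sqrt(n), so quadrupling the held-out set only halves the uncertainty.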

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • David Forsyth
  1. Computer Science Department, University of Illinois Urbana Champaign, Urbana, USA