Abstract
In this chapter we discuss some simple and essentially model-free methods for classification and pattern recognition. Because they are highly unstructured, they typically aren't useful for understanding the nature of the relationship between the features and class outcome. However, as black box prediction engines, they can be very effective, and are often among the best performers in real data problems. The nearest-neighbor technique can also be used in regression; this was touched on in Chapter 2 and works reasonably well for low-dimensional problems. However, with high-dimensional features, the bias-variance tradeoff does not work as favorably for nearest-neighbor regression as it does for classification.
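To make the nearest-neighbor idea concrete, here is a minimal sketch of a k-nearest-neighbor classifier. This is not code from the chapter; the function name `knn_predict`, the toy data, and the choice of Euclidean distance with majority voting are illustrative assumptions, shown only to indicate how such a model-free classifier operates.

```python
from collections import Counter
import math

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from the query point x to each training point
    # (hypothetical helper, not from the chapter)
    dists = [math.dist(p, x) for p in X_train]
    # indices of the k nearest training points
    nearest = sorted(range(len(X_train)), key=lambda i: dists[i])[:k]
    # majority vote among the labels of those neighbors
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# toy example: two well-separated classes in the plane
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
y = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(X, y, (0.15, 0.15), k=3))  # -> a
print(knn_predict(X, y, (1.05, 1.00), k=3))  # -> b
```

Note that the prediction is entirely local: no model is fit, so the method is flexible but, as the abstract cautions, offers little insight into the feature-outcome relationship.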
Copyright information
© 2001 Springer Science+Business Media New York
About this chapter
Cite this chapter
Hastie, T., Friedman, J., Tibshirani, R. (2001). Prototype Methods and Nearest-Neighbors. In: The Elements of Statistical Learning. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-0-387-21606-5_13
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4899-0519-2
Online ISBN: 978-0-387-21606-5
eBook Packages: Springer Book Archive