Abstract
In its broadest sense, smoothing is the very essence of statistics. To smooth is to sand away the rough edges from a set of data. More precisely, the aim of smoothing is to remove data variability that has no assignable cause and thereby to make systematic features of the data more apparent. In recent years the term smoothing has taken on a somewhat more specialized meaning in the statistical literature. Smoothing has become synonymous with a variety of nonparametric methods used in the estimation of functions, and it is in this sense that we shall use the term. Of course, a primary aim of smoothing in this latter sense is still to reveal interesting data features. Some major accounts of smoothing methods in various contexts may be found in Priestley (1981), Devroye and Györfi (1985), Silverman (1986), Eubank (1988), Härdle (1990), Wahba (1990), Scott (1992), Tarter and Lock (1993), Green and Silverman (1994), Wand and Jones (1995) and Fan and Gijbels (1996).
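To make the idea concrete, one of the simplest nonparametric smoothers in this sense is the kernel regression estimator, which replaces each response by a locally weighted average of its neighbors. The following sketch (not taken from the chapter; the function name, kernel choice, and bandwidth are illustrative assumptions) shows a Nadaraya–Watson estimator with a Gaussian kernel:

```python
import numpy as np

def kernel_smooth(x, y, x_eval, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    At each evaluation point, the estimate is a weighted average of
    the responses y, with weights decaying with distance in x.
    Kernel and bandwidth are illustrative choices, not the chapter's.
    """
    # Pairwise scaled distances between evaluation and design points
    u = (x_eval[:, None] - x[None, :]) / bandwidth
    weights = np.exp(-0.5 * u**2)  # Gaussian kernel weights
    return weights @ y / weights.sum(axis=1)

# Noisy observations of a smooth signal
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

smooth = kernel_smooth(x, y, x, bandwidth=0.05)
```

Averaging over a neighborhood removes much of the unassignable variability while preserving the underlying trend; the bandwidth controls the trade-off between the two, a theme developed throughout the chapter.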
© 1997 Springer Science+Business Media New York
Hart, J.D. (1997). Some Basic Ideas of Smoothing. In: Nonparametric Smoothing and Lack-of-Fit Tests. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4757-2722-7_2
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4757-2724-1
Online ISBN: 978-1-4757-2722-7