Some Basic Ideas of Smoothing

  • Jeffrey D. Hart
Part of the Springer Series in Statistics book series (SSS)

Abstract

In its broadest sense, smoothing is the very essence of statistics. To smooth is to sand away the rough edges from a set of data. More precisely, the aim of smoothing is to remove data variability that has no assignable cause and thereby to make the systematic features of the data more apparent. In recent years the term smoothing has taken on a somewhat more specialized meaning in the statistical literature. Smoothing has become synonymous with a variety of nonparametric methods used in the estimation of functions, and it is in this sense that we shall use the term. Of course, a primary aim of smoothing in this latter sense is still to reveal interesting data features. Some major accounts of smoothing methods in various contexts may be found in Priestley (1981), Devroye and Györfi (1985), Silverman (1986), Eubank (1988), Härdle (1990), Wahba (1990), Scott (1992), Tarter and Lock (1993), Green and Silverman (1994), Wand and Jones (1995), and Fan and Gijbels (1996).
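To make the specialized sense of smoothing concrete, the sketch below implements one of the simplest nonparametric function estimators, a Nadaraya-Watson kernel regression smoother with a Gaussian kernel. It is a minimal illustration only, not a method taken from the chapter: the simulated data, the bandwidth h, and the helper name nadaraya_watson are all hypothetical choices made for this example.

```python
# A minimal sketch of kernel smoothing (Nadaraya-Watson regression).
# All data and tuning choices below are illustrative assumptions.
import numpy as np

def nadaraya_watson(x, y, grid, h):
    """Gaussian-kernel estimate of E[Y | X = g] at each point g of `grid`."""
    # Kernel weights K((g - x_i) / h) for every grid point g and datum x_i.
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    # Weighted average of the responses; rows are normalized by total weight.
    return (w @ y) / w.sum(axis=1)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, 100)  # signal plus noise
grid = np.linspace(0.0, 1.0, 200)
fit = nadaraya_watson(x, y, grid, h=0.05)  # smooth estimate of the signal
```

The bandwidth h controls the trade-off at the heart of smoothing: a small h tracks the noisy data closely, while a large h sands away variability, with the systematic feature (here, the sine curve) emerging in between.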



Copyright information

© Springer Science+Business Media New York 1997

Authors and Affiliations

  • Jeffrey D. Hart
    1. Department of Statistics, Texas A&M University, College Station, USA
