
Nonparametric Regression

Analysis of Neural Data

Part of the book series: Springer Series in Statistics (SSS)


Abstract

At the beginning of Chapter 14 we said that modern regression applies the models displayed in Eqs. (14.3) and (14.4).


Notes

  1.

    In Section A.9 of the Appendix we give the definition of a basis for \(\mathbb{R}^n\), which is an \(n\)-dimensional vector space. The basis function terminology refers to an extension of this idea to infinitely many dimensions: the functions \(f(x)\) on an interval \([a,b]\) that satisfy

    $$ \int _a^b f(x)^2\,dx < \infty $$

    (here the Lebesgue integral is used) form an infinite-dimensional vector space, and if the functions \(B_j(x)\) form a basis, then every such \(f(x)\) may be written as

    $$ f(x)=\sum _{j=1}^{\infty } c_jB_j(x). $$
  2.

    Because the span of the columns of the \(X\) matrix using \(B\)-splines will be the same as the span of the \(X\) matrix using the orthogonalized power basis, the resulting least-squares estimated fits \(X\hat{\beta }\) will be the same in both cases.

  3.

    One method, known as backfitting, cycles through the variables \(x_j\), using smoothing (here, spline smoothing) to fit the residuals from a regression on all other variables.

  4.

    There remain upward trends in the residual plots. This is due to the penalized fitting, which induces correlation between the residuals and the fitted values.

  5.

    The names Gabor and Morlet both get attached to what is perhaps more properly known as the Morlet wavelet, which has the form of a product of a normal pdf and a complex exponential, the real and imaginary parts of which are sinusoidal.

  6.

    The terminology comes from spectral analysis (see Section 18.3.3) where the width corresponds to a band of frequencies.

  7.

    A popular variation on this theme, called loess, modifies the weights so that large residuals (outliers) exert less influence on the fit. The terminology comes from the English meaning of loess, which is a silt-like sediment, and is derived from the German word lƶss, which means ā€œloose.ā€
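
The truncated basis expansion in note 1 can be sketched numerically. This is a minimal illustration, not from the book: it assumes a cosine basis \(B_j(x)=\cos(j\pi x)\) on \([0,1]\) and a target that happens to lie in the span of the first few basis functions, so least squares recovers the coefficients \(c_j\) essentially exactly.

```python
import numpy as np

# Hypothetical illustration of a finite basis expansion f(x) = sum_j c_j B_j(x),
# using the cosine basis B_j(x) = cos(j * pi * x) on [0, 1] (an assumed choice).

def cosine_design(x, J):
    """Design matrix with columns B_0(x), ..., B_J(x)."""
    return np.column_stack([np.cos(j * np.pi * x) for j in range(J + 1)])

x = np.linspace(0.0, 1.0, 200)
f = 2.0 * np.cos(np.pi * x) - 0.5 * np.cos(3 * np.pi * x)  # lies in the span

B = cosine_design(x, J=5)
c, *_ = np.linalg.lstsq(B, f, rcond=None)   # least-squares coefficients c_j
approx = B @ c

print(np.max(np.abs(f - approx)))  # essentially zero: exact recovery
```

In practice the target is not in the span, and the truncation level \(J\) controls the usual bias-variance trade-off.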
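
Note 2's point, that design matrices with the same column span yield the same least-squares fit \(X\hat{\beta}\), is easy to check directly. As a stand-in for B-splines versus the orthogonalized power basis (not implemented here), this sketch compares the raw power basis with the Legendre basis, which span the same space of polynomials.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(50)

d = 5
X_power = np.vander(x, d + 1)                   # power basis x^d, ..., x, 1
X_leg = np.polynomial.legendre.legvander(x, d)  # Legendre basis P_0, ..., P_d

# Same column span => identical fitted values, though the coefficients differ.
fit_power = X_power @ np.linalg.lstsq(X_power, y, rcond=None)[0]
fit_leg = X_leg @ np.linalg.lstsq(X_leg, y, rcond=None)[0]

print(np.max(np.abs(fit_power - fit_leg)))  # near machine precision
```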
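
The backfitting cycle in note 3 can be sketched in a few lines. This is an assumed minimal version: a low-degree polynomial fit stands in for the spline smoother, and the model \(y = f_1(x_1) + f_2(x_2) + \text{noise}\) and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
y = np.sin(2 * x1) + x2 ** 2 + 0.1 * rng.standard_normal(n)

def smooth(x, r, deg=5):
    """Stand-in smoother: polynomial fit of residuals r on x."""
    return np.polyval(np.polyfit(x, r, deg), x)

f1 = np.zeros(n)
f2 = np.zeros(n)
for _ in range(20):                      # cycle through the variables x_j
    f1 = smooth(x1, y - y.mean() - f2)   # fit residuals from the other term
    f1 -= f1.mean()                      # center for identifiability
    f2 = smooth(x2, y - y.mean() - f1)
    f2 -= f2.mean()

fitted = y.mean() + f1 + f2
print(np.corrcoef(fitted, y)[0, 1])  # high: the additive fit captures the signal
```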
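
The form of the Morlet wavelet described in note 5 can be written out directly. This sketch omits normalizing constants and assumes the conventional center frequency \(\omega_0 = 6\).

```python
import numpy as np

# Morlet wavelet (unnormalized): a normal-pdf envelope times a complex
# exponential, psi(t) = exp(-t^2 / 2) * exp(i * w0 * t), with w0 = 6 assumed.
w0 = 6.0
t = np.linspace(-4, 4, 801)
psi = np.exp(-t ** 2 / 2) * np.exp(1j * w0 * t)

# The real and imaginary parts are sinusoids under a Gaussian envelope.
print(np.allclose(psi.real, np.exp(-t ** 2 / 2) * np.cos(w0 * t)))  # True
print(np.allclose(psi.imag, np.exp(-t ** 2 / 2) * np.sin(w0 * t)))  # True
```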
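
The downweighting of large residuals mentioned in note 7 is commonly done with bisquare robustness weights. This sketch assumes Cleveland's \(6\times\) median-absolute-residual scaling; it illustrates only the reweighting step, not a full loess implementation.

```python
import numpy as np

def bisquare_weights(residuals):
    """Bisquare weights: residuals beyond 6 * median(|r|) get weight zero."""
    s = 6.0 * np.median(np.abs(residuals))   # robust scale estimate
    u = np.clip(np.abs(residuals) / s, 0.0, 1.0)
    return (1 - u ** 2) ** 2

r = np.array([0.1, -0.2, 0.05, 3.0])         # last point is an outlier
w = bisquare_weights(r)
print(w)  # outlier's weight is exactly 0; typical points stay near 1
```

In robust loess these weights multiply the local (distance-based) weights in the next weighted fit, and the fit-reweight cycle is iterated a few times.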

Author information

Corresponding author

Correspondence to Robert E. Kass.


Copyright information

© 2014 Springer Science+Business Media New York

About this chapter

Cite this chapter

Kass, R.E., Eden, U.T., Brown, E.N. (2014). Nonparametric Regression. In: Analysis of Neural Data. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-9602-1_15
