Simple Regression

  • Kunio Takezawa
Chapter

Abstract

When the data \(\{(x_{i},y_{i})\}\) (1 ≤ i ≤ n) are given, \(\hat{a}_{0}\) and \(\hat{a}_{1}\) are derived by minimizing the residual sum of squares (RSS) in a procedure called a simple regression:
$$\displaystyle{ \mathrm{RSS} =\sum _{i=1}^{n}{(y_{i} - a_{0} - a_{1}x_{i})}^{2} =\sum _{i=1}^{n}e_{i}^{2}, }$$
where \((y_{i} - a_{0} - a_{1}x_{i})\) \((= e_{i})\) is a residual. This process yields the regression equation:
$$\displaystyle{ y =\hat{ a}_{0} +\hat{ a}_{1}x, }$$
where \(\hat{a}_{0}\) is the intercept and \(\hat{a}_{1}\) is the gradient (slope). Each data point is represented as
$$\displaystyle{ y_{i} =\hat{ a}_{0} +\hat{ a}_{1}x_{i} + e_{i}. }$$
Values such as \(a_{0}\) and \(a_{1}\) are called regression coefficients. The “ \(\widehat{}\) ” (hat) of \(\hat{a}_{0}\) and \(\hat{a}_{1}\) indicates that these values are estimates.
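Minimizing the RSS with respect to \(a_{0}\) and \(a_{1}\) (setting both partial derivatives to zero) gives the well-known closed-form estimates \(\hat{a}_{1} = \sum_{i}(x_{i}-\bar{x})(y_{i}-\bar{y}) / \sum_{i}(x_{i}-\bar{x})^{2}\) and \(\hat{a}_{0} = \bar{y} - \hat{a}_{1}\bar{x}\). The following is a minimal sketch of this computation; the data values are hypothetical and chosen only for illustration:

```python
import numpy as np

# Hypothetical sample data {(x_i, y_i)}, 1 <= i <= n
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

x_bar, y_bar = x.mean(), y.mean()

# Closed-form least-squares estimates obtained by minimizing the RSS
a1_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
a0_hat = y_bar - a1_hat * x_bar

# Residuals e_i = y_i - a0_hat - a1_hat * x_i, and the minimized RSS
e = y - a0_hat - a1_hat * x
rss = np.sum(e ** 2)

print(a0_hat, a1_hat, rss)
```

Note that with an intercept in the model, the residuals always sum to zero, which is a quick sanity check on any implementation.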

Keywords

Null Hypothesis · Regression Coefficient · Simulation Data · Probability Density Function · Prediction Error

Copyright information

© Springer Japan 2014

Authors and Affiliations

  • Kunio Takezawa
  1. National Agricultural and Food Research Organization, Tsukuba, Japan
  2. Graduate School of Life and Environmental Sciences, University of Tsukuba, Tsukuba, Japan