Learning Regression Analysis by Simulation, pp 109–162

# Simple Regression

Chapter

## Abstract

When the data \(\{(x_{i},y_{i})\}\) (1 ≤ *i* ≤ *n*) are given, *a*_{0} and *a*_{1} are derived by minimizing the residual sum of squares (*RSS*) in a procedure called a simple regression:$$\displaystyle{ RSS =\sum _{i=1}^{n}{(y_{i} - a_{0} - a_{1}x_{i})}^{2} =\sum _{i=1}^{n}e_{i}^{2}, }$$

where \((y_{i} - a_{0} - a_{1}x_{i})\) ( = *e*_{ i }) is a residual. This process yields the regression equation:$$\displaystyle{ y =\hat{a}_{0} +\hat{a}_{1}x, }$$

where *a*_{0} is the intercept and *a*_{1} is the gradient (slope). Each data point is represented as$$\displaystyle{ y_{i} =\hat{a}_{0} +\hat{a}_{1}x_{i} + e_{i}. }$$

*a*_{0} and *a*_{1} are called regression coefficients. The “ \(\widehat{}\) ” (hat) of \(\hat{a}_{0}\) and \(\hat{a}_{1}\) indicates that these values are estimates.

## Keywords

Null Hypothesis Regression Coefficient Simulation Data Probability Density Function Prediction Error

## Copyright information

© Springer Japan 2014