Abstract
In a linear regression model, the criterion is modeled as a linear combination of the weighted predictors and a disturbance term.
The model gives rise to two related properties: linearity and additivity.
Notes
- 1.
Hereafter, the phrase “linear in the variables” will be used to describe a linear relation and the phrase “linear in the parameters” will be used to describe a linear model or function.
- 2.
In some textbooks, nonlinear functions that can be linearized are called “intrinsically linear functions.”
- 3.
I am using v to denote the error term because later we are going to use e to denote exponential functions.
- 4.
Log transformations cannot be made on negative numbers or zero. To accommodate this issue, some authors suggest adding a constant to all scores before using a log transformation. This practice is not easy to justify and should be avoided in favor of nonlinear regression models.
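The problem described in note 4 is easy to demonstrate numerically, and so is the arbitrariness of the added constant. A minimal sketch in Python (the constants 1 and 10 are illustrative choices of mine, not values from the chapter):

```python
import math

# log is undefined for zero and negative values
scores = [0.0, 2.0, 5.0]
# math.log(scores[0])  # would raise ValueError: math domain error

# Adding a constant "fixes" the error, but the choice is arbitrary
# and changes the shape of the transformed scores:
shift1 = [math.log(x + 1) for x in scores]
shift10 = [math.log(x + 10) for x in scores]

# Ratios between transformed scores differ under each choice,
# so downstream slope estimates depend on the arbitrary constant.
print(shift1[2] / shift1[1])    # ~1.631
print(shift10[2] / shift10[1])  # ~1.090
```

Because the two shifted analyses imply different relations among the same scores, there is no principled way to choose the constant, which is the note's point.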
- 5.
With a logged predictor, the intercept represents the fitted value of y when x = 1 (because ln 1 = 0). Whether this value is meaningful will depend on the sample.
- 6.
We use a negative value for b to model an exponential decay curve.
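The role of the sign of b in note 6 can be sketched numerically (the values of a and b here are made-up illustrations):

```python
import math

a, b = 100.0, -0.5  # negative b -> exponential decay
decay = [a * math.exp(b * t) for t in range(5)]

# Values shrink toward zero as t grows:
# 100.0, ~60.65, ~36.79, ~22.31, ~13.53
assert all(decay[i] > decay[i + 1] for i in range(4))

# With b > 0 the same function grows instead:
growth = [a * math.exp(0.5 * t) for t in range(5)]
assert all(growth[i] < growth[i + 1] for i in range(4))
```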
- 7.
People commonly refer to tasks that are difficult to master as having a “steep learning curve,” but this is incorrect. Easy tasks produce a steep learning curve, as was the case in our earlier example when performance rose quickly with practice, then leveled off. In contrast, difficult tasks produce shallow learning curves, like the function we are considering in this example.
- 8.
Economists use power functions to describe the elasticity of a commodity, defined as the expected percentage change in demand given a 1% change in price. When b < 2, the regression coefficient from the transformed analysis provides a good approximation of elasticity; when b > 2, elasticity should be computed from the data: E = (1.01^b × 100) − 100.
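The exact formula in note 8 and the coefficient-based approximation can be compared directly (the function name is mine):

```python
def elasticity_exact(b):
    """Exact % change in demand for a 1% price change
    when demand follows a power function of price with exponent b."""
    return (1.01 ** b) * 100 - 100

# For smaller b the regression coefficient approximates elasticity well:
print(elasticity_exact(1.5))  # ~1.504, vs the approximation b = 1.5

# For larger b the gap between exact and approximate values widens:
print(elasticity_exact(3.0))  # ~3.030, vs the approximation b = 3.0
```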
- 9.
We are setting λ₀ = 1 for our initial value (i.e., x^1 = x).
- 10.
λ ≈ 0 indicates that a log transformation of the predictor is appropriate, and λ ≈ 1 indicates that no transformation is needed.
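The convention in notes 9 and 10 follows the usual power family, in which λ = 1 leaves x unchanged and λ = 0 is defined as the log. A minimal sketch (the function name is mine, and I use the scaled Box-Cox-style form (x^λ − 1)/λ so the log emerges as a continuous limit; the chapter's plain x^λ convention simply defines λ = 0 as log x):

```python
import math

def power_transform(x, lam, eps=1e-8):
    """Power-family transformation of a positive predictor x.
    Uses the scaled form (x**lam - 1) / lam so that the
    lam -> 0 limit is exactly log(x)."""
    if abs(lam) < eps:
        return math.log(x)
    return (x ** lam - 1) / lam

x = 5.0
print(power_transform(x, 1.0))    # 4.0 -- essentially the raw predictor
print(power_transform(x, 0.001))  # ~1.6107, close to log(5) ~ 1.6094
print(power_transform(x, 0.0))    # log(5)
```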
- 11.
With a single predictor, nonparametric smoothers are called scatterplot smoothers.
- 12.
See also Fox (2000, p. 23).
- 13.
If you are thinking that this procedure sounds a lot like the Newey-West technique we learned in Chap. 7, you are absolutely right. The Newey-West technique uses a kernel known as the Bartlett kernel.
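The Bartlett kernel mentioned in note 13 assigns linearly declining weights to successive lags, w_j = 1 − j/(L + 1) for j = 0, …, L. A minimal sketch (the function name is mine):

```python
def bartlett_weights(max_lag):
    """Bartlett kernel weights w_j = 1 - j/(L+1) for lags j = 0..L,
    as used in the Newey-West covariance estimator."""
    L = max_lag
    return [1 - j / (L + 1) for j in range(L + 1)]

print(bartlett_weights(3))  # [1.0, 0.75, 0.5, 0.25]
```

The linear taper downweights distant lags so that the resulting covariance estimate stays positive semi-definite.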
References
Box, G. E. P., & Tidwell, P. W. (1962). Transformation of the independent variables. Technometrics, 4, 531–550.
Buskirk, T. D., Willoughby, L. M., & Tomazic, T. J. (2013). Nonparametric statistical techniques. In T. D. Little (Ed.), The Oxford handbook of quantitative methods in psychology (Statistical analysis, Vol. 2, pp. 106–141). New York: Oxford University Press.
Cleveland, W. S. (1979). Robust locally weighted regression and smoothing scatterplots. Journal of the American Statistical Association, 74, 829–836.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. Mahwah: Erlbaum.
Fox, J. (2000). Nonparametric simple regression: Smoothing scatterplots. Thousand Oaks: Sage.
Manning, W. G., & Mullahy, J. (2001). Estimating log models: To transform or not to transform? Journal of Health Economics, 20, 461–494.
Mosteller, F., & Tukey, J. W. (1977). Data analysis and regression: A second course in statistics. New York: Pearson.
Xiao, X., White, E. P., Hooten, M. B., & Durham, S. L. (2011). On the use of log-transformation vs. nonlinear regression for analyzing power laws. Ecology, 92, 1887–1894.
© 2014 Springer International Publishing Switzerland
Brown, J.D. (2014). Linearizing Transformations and Nonparametric Smoothers. In: Linear Models in Matrix Form. Springer, Cham. https://doi.org/10.1007/978-3-319-11734-8_8