Summary
Regression analysis encompasses a far wider range of statistical procedures than is often appreciated. This chapter discusses a number of common data mining procedures within a regression framework, including non-parametric smoothers, classification and regression trees, bagging, and random forests. In each case, the goal is to characterize one or more distributional features of a response conditional on a set of predictors.
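To illustrate the regression framing, consider the simplest procedure on that list: a non-parametric smoother estimates the conditional mean E[y | x] directly from the data, with no parametric model. The sketch below is a hypothetical Nadaraya-Watson kernel smoother written for this summary (not code from the chapter); the function name, Gaussian kernel, and bandwidth value are all illustrative assumptions.

```python
import math

def kernel_smooth(xs, ys, x0, bandwidth=1.0):
    """Estimate the conditional mean E[y | x = x0] as a locally
    weighted average of the observed responses (Gaussian kernel)."""
    # Observations near x0 receive large weights; distant ones, small.
    weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Toy data: y = x**2 with no noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [x * x for x in xs]

# The smoothed estimate at x = 2 should fall near the true value 4.
print(round(kernel_smooth(xs, ys, 2.0, bandwidth=0.5), 3))
```

Each fitted value is a locally weighted average of the responses. Trees, bagging, and random forests can be read the same way: they, too, produce data-driven estimates of features of the conditional distribution, differing only in how the local averaging neighborhoods are constructed.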
Acknowledgments
The final draft of this chapter was funded in part by a grant from the National Science Foundation (SES-0437169), "Ensemble Methods for Data Analysis in the Behavioral, Social and Economic Sciences." The chapter was completed while the author was visiting the Department of Earth, Atmosphere, and Oceans at the École Normale Supérieure in Paris. Support from both is gratefully acknowledged.
Copyright information
© 2009 Springer Science+Business Media, LLC
Cite this chapter
Berk, R.A. (2009). Data Mining within a Regression Framework. In: Maimon, O., Rokach, L. (eds) Data Mining and Knowledge Discovery Handbook. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-09823-4_11
Print ISBN: 978-0-387-09822-7
Online ISBN: 978-0-387-09823-4