Data Mining within a Regression Framework

A chapter in Data Mining and Knowledge Discovery Handbook

Summary

Regression analysis encompasses a far wider range of statistical procedures than is often appreciated. In this chapter, a number of common Data Mining procedures are discussed within a regression framework. These include non-parametric smoothers, classification and regression trees, bagging, and random forests. In each case, the goal is to characterize one or more distributional features of a response conditional on a set of predictors.
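
To make the framing concrete, the sketch below reads two such procedures as estimators of a conditional feature of the response, here the conditional mean E[y | x]. It is an illustration only, not code from the chapter; it assumes the Python libraries scikit-learn and statsmodels and uses simulated data.

```python
# A minimal sketch (assumed libraries: numpy, scikit-learn, statsmodels; not from the chapter).
# A non-parametric smoother and a random forest are both used here as estimators
# of the conditional mean E[y | x], one distributional feature of the response
# given the predictor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=500)
y = np.sin(x) + rng.normal(scale=0.3, size=500)  # noisy response around an unknown regression function

# Non-parametric smoother: lowess returns (x, fitted value) pairs sorted by x,
# tracing out an estimate of E[y | x] without assuming a functional form.
smooth = lowess(y, x, frac=0.3)

# Random forest: an ensemble of regression trees whose averaged predictions
# also estimate the conditional mean of the response.
forest = RandomForestRegressor(n_estimators=500, random_state=0)
forest.fit(x.reshape(-1, 1), y)

grid = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
forest_mean = forest.predict(grid)  # fitted conditional means over a grid of predictor values
```

Viewed this way, the smoother and the forest differ in how flexibly they estimate the regression function, not in what they are estimating, which is the sense in which such procedures sit within a regression framework.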

Acknowledgments

The final draft of this chapter was funded in part by a grant from the National Science Foundation (SES-0437169), "Ensemble Methods for Data Analysis in the Behavioral, Social and Economic Sciences." The chapter was completed while the author was visiting the Department of Earth, Atmosphere, and Oceans at the École Normale Supérieure in Paris. Support from both is gratefully acknowledged.

Author information

Corresponding author

Correspondence to Richard A. Berk.

Copyright information

© 2009 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Berk, R.A. (2009). Data Mining within a Regression Framework. In: Maimon, O., Rokach, L. (eds) Data Mining and Knowledge Discovery Handbook. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-09823-4_11

  • DOI: https://doi.org/10.1007/978-0-387-09823-4_11

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-0-387-09822-7

  • Online ISBN: 978-0-387-09823-4

  • eBook Packages: Computer Science, Computer Science (R0)
