Introductory statistics courses often start with summary statistics, then develop a notion of probability, and finally turn to parametric models – mostly the normal – for inference. By the end of the course, the student has seen estimation and hypothesis testing for means, proportions, ANOVA, and maybe linear regression. This is a good approach for a first encounter with statistical thinking. The student who goes on takes a familiar series of courses: survey sampling, regression, Bayesian inference, multivariate analysis, nonparametrics and so forth, up to the crowning glories of decision theory, measure theory, and asymptotics. In aggregate, these courses develop a view of statistics that continues to provide insights and challenges.
© 2009 Springer-Verlag New York
Clarke, B., Fokoué, E., Zhang, H.H. (2009). Variability, Information, and Prediction. In: Principles and Theory for Data Mining and Machine Learning. Springer Series in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-0-387-98135-2_1
Print ISBN: 978-0-387-98134-5
Online ISBN: 978-0-387-98135-2
eBook Packages: Mathematics and Statistics (R0)