
Mathematical and Statistical Preliminaries

  • James E. Gentle
Chapter
Part of the Statistics and Computing book series (SCO)

Abstract

The purpose of an exploration of data may be rather limited and ad hoc, or it may be more general, perhaps to gain understanding of some natural phenomenon. The questions addressed in the data exploration may be somewhat open-ended. The process of understanding often begins with general questions about the structure of the data. At any stage of the analysis, our understanding is facilitated by means of a model. A model is a description that embodies our current understanding of a phenomenon. In an operational sense, we can formulate a model either as a description of a data-generating process or as a prescription for processing data. The model is often expressed as a set of equations that relate data elements to each other, and it may include probability distributions for the data elements. If any of the data elements are considered to be realizations of random variables, the model is a stochastic model.

A model should not limit our analysis; rather, the model should be able to evolve. The process of understanding involves successive refinements of the model, proceeding from vague models to more specific ones. An exploratory data analysis may begin by mining the data to identify interesting properties; these properties generally raise questions that are to be explored further.

A class of models may have a common form within which the members of the class are distinguished by values of parameters. For example, the class of normal probability distributions has a single form of probability density function with two parameters. Within this family of probability distributions, these two parameters completely characterize the distributional properties. If this form of model is chosen to represent the properties of a dataset, we may seek confidence intervals for the values of the two parameters or perform statistical tests of hypothesized values of these two parameters.
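As a hedged illustration of the two-parameter normal model described above (a minimal sketch, not code from the chapter), the following uses only Python's standard library to estimate the mean and standard deviation from a sample and form a large-sample confidence interval for the mean. The helper name `normal_mean_ci` and the simulated data are assumptions made for this example.

```python
import math
import random
from statistics import NormalDist, mean, stdev

def normal_mean_ci(data, confidence=0.95):
    """Estimate normal parameters and a large-sample CI for the mean.

    Illustrative sketch: uses the sample mean and standard deviation
    with a normal (z) quantile, which is adequate when n is large.
    """
    n = len(data)
    mu_hat = mean(data)
    sigma_hat = stdev(data)  # sample standard deviation
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # e.g. ~1.96 for 95%
    half_width = z * sigma_hat / math.sqrt(n)
    return mu_hat, sigma_hat, (mu_hat - half_width, mu_hat + half_width)

# Simulated data from a known normal distribution (mu=10, sigma=2)
random.seed(42)
sample = [random.gauss(10.0, 2.0) for _ in range(200)]
mu_hat, sigma_hat, (lo, hi) = normal_mean_ci(sample)
```

Because the two parameters completely characterize the distribution, the fitted `(mu_hat, sigma_hat)` pair summarizes everything the normal model says about the data.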
In models that are not as mathematically tractable as the normal probability model—and many realistic models are not—we may need to use computationally intensive methods involving simulations, resamplings, and multiple views to make inferences about the parameters of a model. These methods are part of the field of computational statistics.
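One such computationally intensive resampling method is the percentile bootstrap. The sketch below (an assumption-laden illustration, not the book's code; the function name `bootstrap_percentile_ci` is hypothetical) resamples the data with replacement, recomputes a statistic on each resample, and takes empirical quantiles of those recomputed values as interval endpoints.

```python
import random
from statistics import median

def bootstrap_percentile_ci(data, stat=median, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval (illustrative sketch).

    Draws n_boot resamples of the data with replacement, evaluates the
    statistic on each, and returns the alpha/2 and 1 - alpha/2 empirical
    quantiles of the resampled statistics.
    """
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(stat(rng.choices(data, k=n)) for _ in range(n_boot))
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Example: interval for the median of skewed (exponential) data,
# where no simple closed-form normal-theory interval applies.
random.seed(1)
data = [random.expovariate(1.0) for _ in range(100)]
lo, hi = bootstrap_percentile_ci(data)
```

No distributional form is assumed here; the resampling distribution of the statistic stands in for the analytically intractable sampling distribution.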

Keywords

Mean Square Error · Probability Density Function · Maximum Likelihood Estimate · Cumulative Distribution Function · Statistical Inference


Copyright information

© Springer-Verlag New York 2009

Authors and Affiliations

  1. Department of Computational & Data Sciences, George Mason University, Fairfax, USA
