Part of the Springer Texts in Statistics book series (STS)
This book is about linear models, that is, models that are linear in the parameters. A typical model considered is

$$Y = X\beta + e,$$

where Y is an n × 1 vector of observations, X is an n × p matrix of known constants called the design matrix, β is a p × 1 vector of unobservable parameters, and e is an n × 1 vector of unobservable random errors. Both Y and e are random vectors. We assume that E(e) = 0 and Cov(e) = σ²I, where σ² is some unknown parameter. (The operations E(·) and Cov(·) will be formally defined later.) Our object is to explore models that can be used to predict future observable events. Much of our effort will be devoted to drawing inferences, in the form of point estimates, tests, and confidence regions, about the parameters β and σ². To obtain tests and confidence regions we will assume that e has an n-dimensional normal distribution with mean vector (0, 0, ..., 0)′ and covariance matrix σ²I, i.e., e ~ N(0, σ²I).
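As a concrete illustration of the model above, the following sketch simulates data from Y = Xβ + e with e ~ N(0, σ²I) and recovers the usual least squares estimates. The specific values of n, p, β, and σ are illustrative choices, not taken from the book; the estimators β̂ = (X′X)⁻¹X′Y and σ̂² = SSE/(n − p) are the standard ones for this model.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, chosen only for reproducibility
n, p = 100, 3
# Design matrix of known constants: intercept column plus two covariates
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta = np.array([1.0, 2.0, -0.5])  # "true" parameter vector (illustrative)
sigma = 0.3

# Random errors with E(e) = 0 and Cov(e) = sigma^2 I
e = rng.normal(scale=sigma, size=n)
Y = X @ beta + e

# Least squares point estimate of beta
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Unbiased estimate of sigma^2: residual sum of squares over (n - p)
resid = Y - X @ beta_hat
sigma2_hat = resid @ resid / (n - p)
```

With a large enough n, `beta_hat` should lie close to `beta` and `sigma2_hat` close to σ² = 0.09, in keeping with the inferential goals described above.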
Keywords: Covariance Matrix, Quadratic Form, Random Vector, Confidence Region, Multivariate Normal Distribution
© Springer Science+Business Media New York 1987