Weak Convergence and Empirical Processes, pp. 412-422

# Convolution and Minimax Theorems

## Abstract

Let *H* be a linear subspace of a Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖. For each *n* ∈ ℕ and *h* ∈ *H*, let *P*_{n,h} be a probability measure on a measurable space (*X*_n, *A*_n). Consider the problem of estimating a "parameter" *k*_n(*h*) given an "observation" *X*_n with law *P*_{n,h}. The convolution theorem and the minimax theorem give lower bounds on how well *k*_n(*h*) can be estimated asymptotically as *n* → ∞. Suppose the sequence of statistical experiments (*X*_n, *A*_n, *P*_{n,h}: *h* ∈ *H*) is "asymptotically normal" and the sequence of parameters is "regular". Then the limit distribution of every "regular" estimator sequence is the convolution of a certain Gaussian distribution and a noise factor. Furthermore, the maximum risk of any estimator sequence is bounded below by the "risk" of this Gaussian distribution. These concepts are defined as follows.
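The conditions named above can be sketched in their standard forms as follows. The rates *r*_n, the derivative *k̇*, and the Banach space *B* in which the parameters take values are notation supplied here for illustration; the precise definitions follow in the chapter.

```latex
% Asymptotic normality of the experiments: for every h in H,
% under P_{n,0},
\log\frac{dP_{n,h}}{dP_{n,0}}
  = \Delta_{n,h} - \tfrac{1}{2}\|h\|^{2} + o_{P_{n,0}}(1),
\qquad
\Delta_{n,h} \rightsquigarrow N\bigl(0, \|h\|^{2}\bigr).

% Regularity of the parameters k_n : H \to B: for some rates r_n
% and a continuous linear map \dot{k} : H \to B,
r_n\bigl(k_n(h) - k_n(0)\bigr) \to \dot{k}(h),
\qquad h \in H.

% Convolution theorem (sketch): if an estimator sequence T_n is
% regular, i.e. r_n\bigl(T_n - k_n(h)\bigr) converges under P_{n,h}
% to a limit law L not depending on h, then
L = N * M,
% the convolution of a Gaussian distribution N determined by
% \dot{k} and a "noise" distribution M.
```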

## Keywords

Loss Function · Coordinate Projection · Separable Banach Space · Estimator Sequence · Brownian Bridge