Fundamentals of Robust Parameter Estimation

Abstract

Research on parameter estimation in different models of real systems has resulted in the development of a number of algorithms possessing theoretically optimal properties with respect to a chosen criterion.


Author information

Correspondence to Branko Kovačević.

Appendices

Appendix 1—Analysis of Asymptotic Properties of Non-recursive Minimax Robust Estimation of Signal Amplitude

Asymptotic properties of the non-recursive estimation of signal parameters (amplitude) are defined by Theorems 3.1 and 3.2. The proofs of these theorems are given in this appendix.

Without going into details that can be found in Huber’s papers, let us roughly sketch the proof of Theorem 3.1. The optimal estimates \(\theta_{n}\) in (3.23) satisfy the necessary condition for the minimum of the adopted criterion

$$\frac{\partial }{\partial \theta }\left. {\sum\limits_{i = 1}^{n} {F\left[ {y(i) - \theta } \right]} } \right|_{{\theta = \theta_{n} }} = \sum\limits_{i = 1}^{n} {\psi \left[ {y(i) - \theta_{n} } \right]} = 0,$$
(3.184)

where \(\psi ( \cdot ) = F^{{\prime }} ( \cdot )\) is a nonlinear influence function. If one further expands the function \(\psi ( \cdot )\) into a Taylor series around the true value of the parameter \(\theta^{*}\), retaining only the first two terms of the expansion, the above equation reduces to

$$\sum\limits_{i = 1}^{n} {\left\{ {\psi \left[ {y(i) - \theta^{*} } \right] + \psi '\left[ {y(i) - \theta^{*} } \right]\,\left[ {\theta_{n} - \theta^{*} } \right]} \right\} = 0} ,$$
(3.185)

whence it follows that

$$\sqrt n \left( {\theta_{n} - {\theta}^{*} } \right) = - \frac{{\sum\limits_{i = 1}^{n} {\psi \left[ {y(i) - {\theta}^{*} } \right]/\sqrt n } }}{{\sum\limits_{i = 1}^{n} {\psi^{{\prime }} \left[ {y(i) - {\theta}^{*} } \right]/n} }},$$
(3.186)

Further it can be written that

$$\sqrt n \left( {\theta_{n} - {\theta}^{*} } \right) = - \,\frac{{\sum\limits_{i = 1}^{n} {\psi \left[ {y(i) - {\theta}^{*} } \right]/\sqrt n } }}{{{\beta^{{\prime }} ({\theta}^{*} )} }},$$
(3.187)

since according to the law of large numbers

$$\sum\limits_{i = 1}^{n} {\psi^{{\prime }} \left[ {y(i) - \theta^{*} } \right]/n} \mathop \to \limits_{n \to \infty } \beta^{\prime} (\theta^{*} ),$$
(3.188)

It also follows from the law of large numbers that

$$\sum\limits_{i = 1}^{n} {\psi^{2} \left[ {y(i) - \theta^{*} } \right]/n} \mathop \to \limits_{n \to \infty } \alpha (\theta^{*} ),$$
(3.189)

Bearing in mind the propositions P1–P3 it is concluded that

$$E\left\{ {\psi (z - {\theta}^{*} )} \right\} = 0,$$
(3.190)

and

$$\begin{aligned} E\left\{ {n(\theta_{n} - \theta^{*} )^{2} } \right\} & = \frac{1}{{\beta^{{{\prime }\,2}} (\theta^{*} )}}E\left\{ {\frac{1}{n}\sum\limits_{i = 1}^{n} {\psi^{2} \left( {y(i) - \theta^{*} } \right)} } \right. +\\ & \quad \left. { + \frac{2}{n}\sum\limits_{i} {\sum\limits_{j,{ } j \ne i} {\psi \left( {y(i) - \theta^{*} } \right)} } \psi \left( {y(j) - \theta^{*} } \right)} \right\} = \frac{{\alpha (\theta^{*} )}}{{\beta^{{{\prime }\,2}} (\theta^{*} )}}, \\ \end{aligned}$$
(3.191)

The last expression stems from the fact that

$$E\left\{ {\psi \left( {y(i) - \theta^{*} } \right)\,\psi \left( {y(j) - \theta^{*} } \right)} \right\} = 0.$$
(3.192)

Thus the random variable \(\sqrt n (\theta_{n} - \theta^{*} )\) has a zero mean value and a variance defined by expression (3.19), and by the central limit theorem it converges in distribution to a Gaussian random variable, which concludes the proof of Theorem 3.1.
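
As a purely illustrative check of Theorem 3.1 (it is not part of the proof), the behaviour of \(\sqrt n (\theta_{n} - \theta^{*} )\) can be simulated for one concrete, hypothetical choice of criterion function. The sketch below assumes a Huber-type influence function with clipping constant k = 1.345, a standard Gaussian perturbation, and an iteratively reweighted solver for the estimating equation (3.184); none of these specific choices is prescribed by the chapter. The sample estimates of \(\alpha (\theta^{*} )\) and \(\beta^{{\prime }} (\theta^{*} )\) follow (3.189) and (3.188).

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1.345  # hypothetical Huber clipping constant

def psi(z, k=K):
    """A bounded, odd, nondecreasing influence function (Huber type)."""
    return np.clip(z, -k, k)

def m_estimate(y, k=K, iters=50):
    """Solve sum_i psi(y(i) - theta) = 0, cf. (3.184), by iterative reweighting."""
    theta = np.median(y)                          # robust starting point
    for _ in range(iters):
        r = np.abs(y - theta)
        w = np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))
        theta = np.sum(w * y) / np.sum(w)         # weighted-mean reformulation
    return theta

theta_star, n, runs = 2.0, 400, 2000
errors = np.array([np.sqrt(n) * (m_estimate(theta_star + rng.standard_normal(n))
                                 - theta_star) for _ in range(runs)])

# Sample counterparts of (3.189) and (3.188), evaluated at the true parameter
z = rng.standard_normal(200_000)
alpha = np.mean(psi(z) ** 2)                      # alpha(theta*)
beta_prime = np.mean(np.abs(z) <= K)              # E{psi'(z)} for the Huber psi
print("empirical var of sqrt(n)(theta_n - theta*):", errors.var())
print("alpha / beta'^2                           :", alpha / beta_prime ** 2)
```

Over many replications the empirical variance of \(\sqrt n (\theta_{n} - \theta^{*} )\) should be close to the ratio \(\alpha (\theta^{*} )/\beta^{{{\prime }\,2}} (\theta^{*} )\) appearing in (3.191).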

The proof of Theorem 3.2 is given next.

According to propositions P7 and P8, the estimate \(\theta_{n}\), obtained from the condition (3.23), almost surely converges to the true value of the parameter θ, and the estimation error is asymptotically normally distributed with zero mean and an asymptotic variance \(V(F,p_{\xi } )\) defined by expression (3.24). If one assumes that the probability density of the perturbation is \(p_{0} ( \cdot )\) and adopts \(F_{0} ( \cdot )\) as the criterion function in (3.23), then it follows from (3.26) that \(V(F_{0} ,p_{0} ) = I^{ - 1} (p_{0} )\), i.e., the asymptotic error variance \(V(F_{0} ,p_{0} )\) reaches the Cramér–Rao lower bound \(I^{ - 1} (p_{0} )\). According to the Cramér–Rao inequality (3.17), it further follows that for an arbitrary criterion function \(F( \cdot )\) in (3.23) one has \(V(F,p_{0} ) \ge V(F_{0} ,p_{0} )\), which proves the right-hand side of the double inequality (3.30).
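
The right-hand side of (3.30) can also be seen numerically. The sketch below is only an illustration under assumed, concrete choices: it takes \(p_{0}\) to be the standard Gaussian density, for which the maximum-likelihood score is \(\psi_{0} (z) = - p_{0}^{\prime } (z)/p_{0} (z) = z\), and compares \(V(F_{0} ,p_{0} )\) with \(V(F,p_{0} )\) for a Huber-type influence function; neither choice is prescribed by the chapter.

```python
import numpy as np

z = np.linspace(-12.0, 12.0, 24001)
dz = z[1] - z[0]
p0 = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)      # assumed p0: standard Gaussian

def V(psi, psi_prime):
    """Asymptotic variance V(F, p0) = alpha / beta'^2, cf. (3.24)."""
    alpha = np.sum(psi**2 * p0) * dz
    beta_prime = np.sum(psi_prime * p0) * dz
    return alpha / beta_prime**2

k = 1.345                                         # hypothetical Huber constant
scores = {
    "psi0(z) = z  (ML score for p0)": (z, np.ones_like(z)),
    "Huber psi,   k = 1.345        ": (np.clip(z, -k, k), (np.abs(z) <= k).astype(float)),
}
for name, (score, dscore) in scores.items():
    print(name, " V =", round(V(score, dscore), 4))

fisher = np.sum((np.gradient(p0, dz) / p0)**2 * p0) * dz
print("Cramer-Rao bound I^{-1}(p0)     =", round(1.0 / fisher, 4))
```

The maximum-likelihood score attains the bound \(I^{ - 1} (p_{0} )\), while any other admissible \(\psi\) yields a larger asymptotic variance, in agreement with \(V(F,p_{0} ) \ge V(F_{0} ,p_{0} )\).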

Instead of the left side of inequality (3.30), let us prove the equivalent inequality \(V^{ - 1} (F_{0} ,p_{\xi } ) \ge V^{ - 1} (F_{0} ,p_{0} )\) for each \(p_{\xi } \in P\), where \(p_{0} ( \cdot )\) is the least favorable density within the class P, i.e., \(I(p_{0} ) \le I(p_{\xi } )\).

Let us denote by \(P_{\varepsilon }\) the distribution function \((1 - \varepsilon )P_{0} + \varepsilon P_{1}\), i.e.,

$$P_{\varepsilon } = \left( {1 - \varepsilon } \right)P_{0} + \varepsilon P_{1} ;\quad p_{\varepsilon } = \left( {1 - \varepsilon } \right)p_{0} + \varepsilon p_{1} ;\quad 0 \le \varepsilon < 1,$$
(3.193)

where \(P_{\varepsilon }\) is an arbitrary distribution within the class P, and \(p_{\varepsilon }\) is the corresponding density. Since \(P_{0}\) and \(P_{1}\) are symmetric convex functions, \(P_{\varepsilon }\) is also a symmetric convex function, i.e., \(P_{\varepsilon } \in P\).

Let us introduce the notation

$$I(\varepsilon ) = I(p_{\varepsilon } ) = \int\limits_{ - \infty }^{\infty } {\frac{{\left[ {\left( {1 - \varepsilon } \right)p_{0}^{{\prime }} + \varepsilon p_{1}^{{\prime }} } \right]^{2} }}{{\left( {1 - \varepsilon } \right)p_{0} + \varepsilon p_{1} }}{{d}}z} ,$$
(3.194)

and

$$ \begin{aligned} Q(\varepsilon ) & = Q(p_{\varepsilon } ) = V^{ - 1} (\psi_{0} ,p_{\varepsilon } ) = \\ & = \frac{{\left[ {(1 - \varepsilon )\int\limits_{ - \infty }^{\infty } {\psi_{0}^{{\prime }} (z)p_{0} (z){d}z + \varepsilon \int\limits_{ - \infty }^{\infty } {\psi_{0}^{{\prime }} (z)p_{1} (z){d}z} } } \right]^{2} }}{{\left( {1 - \varepsilon } \right)\int\limits_{ - \infty }^{\infty } {\psi_{0}^{2} (z)p_{0} (z){d}z + \varepsilon \int\limits_{ - \infty }^{\infty } {\psi_{0}^{2} (z)p_{1} (z){d}z} } }}. \\ \end{aligned} $$
(3.195)

Then \(Q(0) = V^{ - 1} (\psi_{0} ,p_{0} )\) and the left side of the inequality (3.30) reduces to \(Q(0) \le Q(\varepsilon )\).

If \(Q(\varepsilon )\) is a convex function, instead of the inequality \(Q(0) \le Q(\varepsilon )\) it is sufficient to prove that \(\left. {\frac{{{d}Q(\varepsilon )}}{{{d}\varepsilon }}} \right|_{\varepsilon = 0} \ge 0\).

The last condition means that the function \(Q(\varepsilon )\) is either monotonically increasing or has a global minimum at the point \(\varepsilon = 0\).

Let us prove that the functions \(Q( \cdot )\) and \(I( \cdot )\) are convex. Indeed, if one introduces into (3.195) the notation \(u_{1} = \int {\psi_{0}^{{\prime }} p_{1}{d}z}\), \(u_{2} = \int {\psi_{0}^{{\prime }} p_{0}{d}z}\), \(v_{1} = \int {\psi_{0}^{2} p_{1}{d}z}\), \(v_{2} = \int {\psi_{0}^{2} p_{0} {d}z}\), then

$$\begin{aligned} Q(\varepsilon ) & = \frac{{\left[ {\varepsilon u_{1} + \left( {1 - \varepsilon } \right)u_{2} } \right]^{2} }}{{\varepsilon v_{1} + \left( {1 - \varepsilon } \right)v_{2} }}= \\ & = \left[ {\varepsilon v_{1} + \left( {1 - \varepsilon } \right)v_{2} } \right]\,\left[ {\frac{{\varepsilon v_{1} }}{{\varepsilon v_{1} + \left( {1 - \varepsilon } \right)v_{2} }}\,\frac{{u_{1} }}{{v_{1} }} + \left( {1 - \frac{{\varepsilon v_{1} }}{{\varepsilon v_{1} + \left( {1 - \varepsilon } \right)v_{2} }}} \right)\frac{{u_{2} }}{{v_{2} }}} \right]^{2} \le\\ & \quad \le \left[ {\varepsilon v_{1} + \left( {1 - \varepsilon } \right)v_{2} } \right]\,\left[ {\frac{{\varepsilon v_{1} }}{{\varepsilon v_{1} + \left( {1 - \varepsilon } \right)v_{2} }}\,\frac{{u_{1}^{2} }}{{v_{1}^{2} }} + \left( {1 - \frac{{\varepsilon v_{1} }}{{\varepsilon v_{1} + \left( {1 - \varepsilon } \right)v_{2} }}} \right)\frac{{u_{2}^{2} }}{{v_{2}^{2} }}} \right] =\\ & = \varepsilon \frac{{u_{1}^{2} }}{{v_{1} }} + \left( {1 - \varepsilon } \right)\frac{{u_{2}^{2} }}{{v_{2} }}. \\ \end{aligned}$$
(3.196)

In this manner it is shown, bearing in mind the definition (3.195) and the notation introduced above, that

$$Q(p_{\varepsilon } ) = Q\left[ {\left( {1 - \varepsilon } \right)p_{0} + \varepsilon p_{1} } \right] \le \left( {1 - \varepsilon } \right)Q(p_{0} ) + \varepsilon Q(p_{1} ).$$
(3.197)

Relation (3.197) is Jensen’s inequality and represents the necessary and sufficient condition of the convexity of the function \(Q( \cdot )\).

It can be shown in the same manner that

$$\frac{{p_{\varepsilon }^{{{\prime }\,2}} }}{{p_{\varepsilon } }} = \frac{{\left[ {\left( {1 - \varepsilon } \right)p_{0}^{{\prime }} + \varepsilon p_{1}^{{\prime }} } \right]^{2} }}{{\left( {1 - \varepsilon } \right)p_{0} + \varepsilon p_{1} }} \le \left( {1 - \varepsilon } \right)\frac{{p_{0}^{{{\prime }2}} }}{{p_{0} }} + \varepsilon\frac{{p_{1}^{{{\prime }2}} }}{{p_{1} }},$$
(3.198)

that is, the function \(p_{\varepsilon }^{{{\prime }\,2}} /p_{\varepsilon }\) is convex in ε, so the same holds for I(ε) as the integral of a convex function, i.e.,

$$I(p_{\varepsilon } ) = I\left( {\left( {1 - \varepsilon } \right)p_{0} + \varepsilon p_{1} } \right) \le \left( {1 - \varepsilon } \right)I(p_{0} ) + \varepsilon I(p_{1} ).$$
(3.199)
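
Both convexity statements, (3.197) for \(Q( \cdot )\) and (3.199) for \(I( \cdot )\), can be checked numerically. The sketch below is only an illustration for one assumed pair of densities, a standard Gaussian \(p_{0}\) and a Laplace density \(p_{1}\), with \(\psi_{0} (z) = - p_{0}^{\prime } (z)/p_{0} (z)\) as in the maximum-likelihood construction (cf. (3.29)); these concrete choices are not taken from the chapter.

```python
import numpy as np

z = np.linspace(-20.0, 20.0, 40001)
dz = z[1] - z[0]

# Assumed densities: p0 standard Gaussian, p1 Laplace with scale 2
p0 = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
p1 = np.exp(-np.abs(z) / 2) / 4.0

def fisher(p):
    """I(p) = integral of p'^2 / p, cf. (3.194)."""
    return np.sum(np.gradient(p, dz)**2 / p) * dz

psi0 = z                       # -p0'/p0 for the Gaussian p0
psi0_prime = np.ones_like(z)

def Q(p):
    """Q(p) = V^{-1}(psi0, p), cf. (3.195)."""
    return (np.sum(psi0_prime * p) * dz)**2 / (np.sum(psi0**2 * p) * dz)

for eps in (0.05, 0.2, 0.5):
    pe = (1 - eps) * p0 + eps * p1
    print(f"eps = {eps:4.2f}:",
          "Q convex:", Q(pe) <= (1 - eps) * Q(p0) + eps * Q(p1) + 1e-9,
          " I convex:", fisher(pe) <= (1 - eps) * fisher(p0) + eps * fisher(p1) + 1e-9)
```

Both printed checks correspond to Jensen-type inequalities of the form (3.197) and (3.199) and should hold for every ε in [0, 1).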

If one further differentiates the relation (3.195) with respect to ε, then after changing the order of differentiation and integration one obtains

$$\left. {\frac{{{{d}}Q(\varepsilon )}}{{{{d}}\varepsilon }}} \right|_{\varepsilon = 0} = \int\limits_{ - \infty }^{\infty } {\left[ { - 2\psi_{0} (z)g^{{\prime }} (z) - \psi_{0}^{2} (z)g(z)} \right]{{d}}z} ;\quad g = p_{1} - p_{0}$$
(3.200)

In the same manner it is shown that

$$\left. {\frac{{{{d}}I(\varepsilon )}}{{{{d}}\varepsilon }}} \right|_{\varepsilon = 0} = \int\limits_{ - \infty }^{\infty } {\left[ { - 2\psi_{0} (z)g^{{\prime }} (z) - \psi_{0}^{2} (z)g(z)} \right]{{d}}z} ;\quad g = p_{1} - p_{0} .$$
(3.201)

From (3.198) it further follows that

$$\frac{1}{\varepsilon }\left[ {\frac{{p_{\varepsilon }^{{{\prime }2}} }}{{p_{\varepsilon } }} - \frac{{p_{0}^{{{\prime }2}} }}{{p_{0} }}} \right] \le \frac{{p_{1}^{{{\prime }\,2}} }}{{p_{1} }} - \frac{{p_{0}^{{{\prime }\,2}} }}{{p_{0} }}.$$
(3.202)

According to the proposition P7, the left side of (3.202) is integrable, so that

$$\frac{1}{\varepsilon }\int\limits_{ - \infty }^{\infty } {\left[ {\frac{{p_{\varepsilon }^{{{\prime }2}} }}{{p_{\varepsilon } }} - \frac{{p_{0}^{{{\prime }2}} }}{{p_{0} }}} \right]} {{d}}z \le \int\limits_{ - \infty }^{\infty } {\left[ {\frac{{p_{1}^{{{\prime }2}} }}{{p_{1} }} - \frac{{p_{0}^{{{\prime }2}} }}{{p_{0} }}} \right]} {{d}}z = I(p_{1} ) - I(p_{0} ) \ge 0.$$
(3.203)

Since according to the proposition P8 the left side of the inequality in (3.203) is nonnegative, i.e.,

$$\frac{1}{\varepsilon }\int\limits_{ - \infty }^{\infty } {\left[ {\frac{{p_{\varepsilon }^{{{\prime }2}} }}{{p_{\varepsilon } }} - \frac{{p_{0}^{{{\prime }2}} }}{{p_{0} }}} \right]} {{d}}z = \frac{1}{\varepsilon }\left[ {I(p_{\varepsilon } ) - I(p_{0} )} \right] \ge 0,$$
(3.204)

the conditions for interchanging the limit and the integral are fulfilled, i.e.,

$$\lim_{\varepsilon \to 0} \frac{1}{\varepsilon }\int\limits_{ - \infty }^{\infty } {\left[ {\frac{{p_{\varepsilon }^{{{\prime }2}} }}{{p_{\varepsilon } }} - \frac{{p_{0}^{{{\prime }2}} }}{{p_{0} }}} \right]} {{d}}z = \int\limits_{ - \infty }^{\infty } {\lim_{\varepsilon \to 0} \frac{1}{\varepsilon }\left[ {\frac{{p_{\varepsilon}^{{{\prime }2}} }}{{p_{\varepsilon} }} - \frac{{p_{0}^{{{\prime }2}} }}{{p_{0} }}} \right]} {{d}}z.$$
(3.205)

If \(p_{\varepsilon }\) in (3.205) is replaced by its defining expression (3.193), one obtains

$$\lim_{\varepsilon \to 0} \frac{1}{\varepsilon }\left[ {\frac{{p_{\varepsilon }^{{{\prime }2}} }}{{p_{\varepsilon } }} - \frac{{p_{0}^{{{\prime }2}} }}{{p_{0} }}} \right] = 2\frac{{p_{0}^{{\prime }} }}{{p_{0} }}\left[ {p_{1}^{{\prime }} - p_{0}^{{\prime }} } \right] - \frac{{p_{0}^{{{\prime }2}} }}{{p_{0}^{2} }}\left[ {p_{1} - p_{0} } \right].$$
(3.206)

According to (3.29) and (3.206) it follows further that

$$\lim_{\varepsilon \to 0} \frac{1}{\varepsilon }\int\limits_{ - \infty }^{\infty } {\left[ {\frac{{p_{\varepsilon }^{{{\prime }2}} }}{{p_{\varepsilon } }} - \frac{{p_{0}^{{{\prime }2}} }}{{p_{0} }}} \right]} {{d}} z = \int\limits_{ - \infty }^{\infty } {\left[ { - 2\psi_{0} g^{{\prime }} - \psi_{0}^{2} g} \right]} {{d}}z \ge 0,$$
(3.207)

where \(g = p_{1} - p_{0}\). Since the left side of the inequality (3.207) is equal to \(Q^{{\prime }} (0)\) (relation (3.200)), it follows that \(Q^{{\prime }} (0) \ge 0\), which concludes the proof of Theorem 3.2.

NOTE 1: The derivation shows that the right-hand side of the inequality (3.30) is a direct consequence of the Cramér–Rao inequality and does not depend on the properties of the criterion function \(F_{0} ( \cdot )\).

NOTE 2: According to (3.201), the converse statement is also valid. Namely, if the density \(p_{0} ( \cdot )\) is such that

$$\int\limits_{ - \infty }^{\infty } {\left[ { - 2\psi_{0} (z)g^{{\prime }} (z) - \psi_{0}^{2} (z)g(z)} \right]} {{d}}z \ge 0;\quad g(z) = p_{1} (z) - p_{0} (z)$$
(3.208)

for each \(p_{1} \in P\), then \(I^{{\prime }} (0) \ge 0\), and since \(I(p_{\varepsilon } )\) is a convex function, it follows that \(I(p) \ge I(0) = I(p_{0} )\) for \(\forall p \in P\), i.e., the density \(p_{0} ( \cdot )\) minimizes the Fisher information \(I(p)\). In other words, the condition (3.208) represents the necessary and sufficient condition for the probability density function \(p_{0} ( \cdot )\) to minimize \(I(p)\). Since \(I(p)\) is a convex function, the function \(p_{0}\) will also be convex, i.e., \(p_{0} \in P\).
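
Note 2 can also be explored numerically. The sketch below, which is only an illustration under the assumption \(\psi_{0} (z) = - p_{0}^{\prime } (z)/p_{0} (z)\) (cf. (3.29)), evaluates the integral in (3.208) for a hypothetical candidate density \(p_{0}\) (a standard Gaussian) against several alternative densities \(p_{1}\); a nonnegative value for every admissible \(p_{1}\) in the class would indicate that the candidate minimizes the Fisher information within that class.

```python
import numpy as np

z = np.linspace(-30.0, 30.0, 60001)
dz = z[1] - z[0]

# Hypothetical candidate for the least favorable density: standard Gaussian
p0 = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
psi0 = -np.gradient(p0, dz) / p0          # psi0 = -p0'/p0, cf. (3.29)

def lhs_3_208(p1):
    """Left-hand side of (3.208) for a given alternative density p1."""
    g = p1 - p0
    return np.sum(-2.0 * psi0 * np.gradient(g, dz) - psi0**2 * g) * dz

candidates = {
    "N(0, 0.5^2)": np.exp(-z**2 / 0.5) / np.sqrt(np.pi * 0.5),
    "N(0, 2^2)  ": np.exp(-z**2 / 8.0) / np.sqrt(8.0 * np.pi),
    "Laplace(1) ": 0.5 * np.exp(-np.abs(z)),
}
for name, p1 in candidates.items():
    print(name, " integral in (3.208) =", round(lhs_3_208(p1), 3))
```

For this particular Gaussian candidate the sign of the integral depends on the alternative \(p_{1}\), which illustrates that a density can be least favorable only relative to a suitably restricted class P.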

Appendix 2—Analysis of Asymptotic Properties of Recursive Minimax Robust Estimation of Signal Amplitude

Asymptotic properties of the recursive estimation of signal parameters (amplitude) are defined by Theorems 3.3 and 3.4. In this appendix the proof of Theorem 3.3 is given, while the proof of Theorem 3.4 is given in Sect. 3.4.

The proof of the first part of Theorem 3.3 follows directly by reducing the algorithm (3.33) to the setting of Gladyshev’s theorem and determining the conditions under which the propositions of that theorem are satisfied.

The algorithm (3.33) can be written in the form

$$\theta (i) = \theta (i - 1) + \gamma (i)\left[ {R(\theta (i - 1)) + \eta (i)} \right],$$
(3.209)

where

$$R(\theta ) = - E_{\xi } \left\{ {\psi (\theta^{*} - \theta + z)} \right\} = - \int\limits_{ - \infty }^{\infty } {\psi (z-\theta + \theta^{*} )p_{\xi } (z){{d}}z}$$
(3.210)

is the regression function (3.35), and

$$\eta (i) = \psi \left( {y(i) - \theta (i - 1)} \right) - R\left( {\theta (i - 1)} \right)$$
(3.211)

is the stochastic component of the algorithm, which appears because the regression function is measured inaccurately. In order to ensure the almost sure convergence of the estimate (3.209) toward the solution \(\theta^{*}\) of the regression Eq. (3.35), it is sufficient to satisfy the following propositions of Gladyshev’s theorem

$$\mathop {\inf }\limits_{{\varepsilon < \left| {\theta - \theta^{*} } \right| < \varepsilon^{ - 1} }} \left( {\theta - \theta^{*} } \right)\left[ {R(\theta ) - R(\theta^{*} )} \right] > 0\quad \text{for}\;\forall \varepsilon > 0,$$
(3.212)
$$E_{\xi } \left\{ {\psi^{2} \left( {\theta^{*} - \theta } + z\right)} \right\} < d\left[ {1 + \left( {\theta - \theta^{*} } \right)^{2}} \right]\quad \text{; }\;\exists d > 0\;\text{and}\;\forall \theta ,$$
(3.213)
$$\sum\limits_{i = 1}^{\infty } {\gamma (i) = \infty } ;\quad \sum\limits_{i = 1}^{\infty } {\gamma^{2} (i) < \infty } .$$
(3.214)

Since, according to proposition P2, \(\psi ( \cdot )\) is an odd function and, according to P1, the probability density function of the perturbation \(p_{\xi } ( \cdot )\) is symmetric around zero, \(R(\theta )\) is an odd function of \(\theta - \theta^{*}\) and has a unique root \(\theta = \theta^{*}\), i.e.,

$$\left. {R(\theta )} \right|_{{\theta = \theta^{*} }} = - E_{\xi }\,\left\{ {\psi (\theta^{*} - \theta^{*} + z)} \right\} = 0.$$
(3.215)

Since in the considered case \(R_{0} = 0\), \(\theta = \theta^{*}\) is also the unique solution of the regression Eq. (3.35).

According to proposition P5, the function \(R(\theta )\) is differentiable at the point \(\theta = \theta^{*}\) and thus continuous at that point. Since

$$\begin{aligned} \left| {R(\theta + \varepsilon ) - R(\theta )} \right| & = \left| {\int\limits_{ - \infty }^{\infty } {\left\{ {\psi (\theta^{*} - \theta - \varepsilon + z) - \psi (\theta^{*} - \theta + z)} \right\}p_{\xi } (z){{d}}z} } \right| \le\\ & \quad \le \int\limits_{ - \infty }^{\infty } {\left| {\psi (\theta^{*} - \theta - \varepsilon + z) - \psi (\theta^{*} - \theta + z)} \right|p_{\xi } (z){{d}}z} \mathop \to \limits_{\varepsilon \to 0} 0, \\ \end{aligned}$$
(3.216)

the function \(R(\theta )\) is also continuous for each \(\theta \ne \theta^{*}\).

Let us consider further the function

$$\begin{aligned} \phi (a) & = R(\theta ) - R(\theta^{*} ) =\\ & = \int\limits_{ - \infty }^{\infty } {\left\{ {\psi (t + a) - \psi (t)} \right\}p_{\xi } (t){{d}}t} =\\ & = \int\limits_{ - \infty }^{ - a/2} {\left\{ {\psi (t + a) - \psi (t)} \right\}p_{\xi } (t){{d}}t} + \int\limits_{ - a/2}^{\infty } {\left\{ {\psi (t + a) - \psi (t)} \right\}p_{\xi } (t){{d}}t} , \\ \end{aligned}$$
(3.217)

where \(a = \theta - \theta^{*} > 0\). If one introduces the replacement \(t = - v\) in the first integral and \(t + a = v\) in the second, it follows that

$$\begin{aligned} \phi (a) & = \int\limits_{a/2}^{\infty } {\left\{ {\psi (a - v) - \psi ( - v)} \right\}p_{\xi } ( - v){{d}}v} +\\ & \quad + \int\limits_{a/2}^{\infty } {\left\{ {\psi (v) - \psi (v - a)} \right\}p_{\xi } (v - a){{d}}v} =\\ & = \int\limits_{a/2}^{\infty } {\left\{ {\psi (v) - \psi (v - a)} \right\}\left\{ {p_{\xi } (v - a) + p_{\xi } (v)} \right\}{{d}}v} . \\ \end{aligned}$$
(3.218)

Since \(\psi ( \cdot )\) is an odd, monotonically nondecreasing function (propositions P3, P2), it follows that \(\psi (v) - \psi (v - a) = \psi (v) + \psi (a - v) > 0\) for \(v > a/2\), and since \(p_{\xi } ( \cdot )\) is an even, nonnegative density function (propositions P3, P1), \(p_{\xi } (v - a) + p_{\xi } (v) > 0\), i.e., the integrand function in (3.218) is positive in the integration range \((a/2,\infty )\), so that \(\phi (a) > 0\), i.e., \(R(\theta ) > R(\theta^{*} ) = 0\) for \(\theta > \theta^{*}\).

The last condition means that \(R'(\theta^{*} ) > 0\), which implies that \(R(\theta )\) is a monotonically nondecreasing function. Since the function \(R(\theta )\) is also continuous, it will be \(\left( {\theta - \theta^{*} } \right)R(\theta ) > 0\) for \(\forall \theta \ne \theta^{*}\), thus the condition (3.212) is satisfied.

The condition (3.213) is fulfilled as a direct consequence of P4, i.e.

$$E_{\xi } \left\{ {\psi^{2} \left( {\theta^{*} - \theta + z} \right)} \right\} = \int\limits_{ - \infty }^{\infty } {\psi^{2} \left( {\theta^{*} - \theta + z} \right)p_{\xi } (z){{d}}z}\,<\,k^{2}$$
(3.219)

if \(\psi ( \cdot )\) is a bounded function, i.e., \(\left| {\psi (z)} \right| \le k\), or if it does not increase faster than a linear function, i.e.,

$$E_{\xi } \left\{ {\psi^{2} \left( {\theta^{*} - \theta + z} \right)} \right\} \le k^{2} E_{\xi } \left\{ {1 + \left( {\theta^{*} - \theta + z} \right)^{2} } \right\} \le k^{2} \left[ {1 + \left( {\theta^{*} - \theta } \right)^{2} } \right],$$
(3.220)

since according to proposition P5 the variance of the noise ξ is bounded.

The condition (3.214) is identical to the proposition P6, which concludes the proof of the first part of the theorem.

The second part of the theorem follows directly from the following propositions of the Robbins–Monro theorem

$$\sum\limits_{i = 2}^{\infty } {\frac{\gamma (i)}{\gamma (1) + \cdots + \gamma (i - 1)}} = \infty ;\quad \sum\limits_{i = 2}^{\infty } {\gamma^{2} (i)} < \infty ,$$
(3.221)
$$\left| {\psi (\theta^{*} - \theta + z)} \right|\,<\,k,$$
(3.222)
$$\left( {\theta - \theta^{*} } \right)\left[ {R(\theta ) - R(\theta^{*} )} \right] > 0\quad \text{for}\;\forall \theta \ne \theta^{*} ,$$
(3.223)
$$R^{{\prime }} (\theta^{*} ) > 0,$$
(3.224)

which represent sufficient conditions for the mean-square convergence of the estimate (3.209) to the root of the regression Eq. (3.35), bearing in mind that in the case under consideration \(R_{0} = 0\).

The condition (3.222) is identical to the first part of the proposition P4. The propositions (3.223) and (3.224) mean that \(R( \cdot )\) is a monotonically nondecreasing function, which has been shown in the first part of the proof of the theorem. Since the harmonic sequence \(\gamma (i) = \gamma /i\) satisfies the condition (3.221), this proves the second part of the theorem as well.
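
The recursion analysed above is easy to simulate. The sketch below is only an illustration with assumed ingredients: a Huber-type bounded ψ (consistent with propositions P2–P4), the harmonic gain \(\gamma (i) = \gamma /i\) just mentioned, and a contaminated Gaussian perturbation. It implements a recursion of the form (3.209)/(3.211), i.e., \(\theta (i) = \theta (i - 1) + \gamma (i)\psi \left( {y(i) - \theta (i - 1)} \right)\), and compares its mean-square error with that of the ordinary sample mean.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 1.345  # hypothetical Huber clipping constant

def psi(z, k=K):
    """Bounded, odd, nondecreasing influence function (Huber type), cf. P2-P4."""
    return min(k, max(-k, z))

def recursive_estimate(y, gamma0=1.0):
    """theta(i) = theta(i-1) + (gamma0 / i) * psi(y(i) - theta(i-1)),
    i.e. the recursion (3.209)/(3.211) with the harmonic gain gamma(i) = gamma0/i."""
    theta = 0.0                                   # arbitrary initial guess
    for i, yi in enumerate(y, start=1):
        theta += (gamma0 / i) * psi(yi - theta)
    return theta

theta_star, n, runs = 2.0, 2000, 200
err_rec, err_mean = [], []
for _ in range(runs):
    # Contaminated noise: mostly N(0,1) with occasional large-variance outliers
    noise = np.where(rng.random(n) < 0.1,
                     10.0 * rng.standard_normal(n),
                     rng.standard_normal(n))
    y = theta_star + noise
    err_rec.append(recursive_estimate(y) - theta_star)
    err_mean.append(np.mean(y) - theta_star)

print("MSE, robust recursion:", np.mean(np.square(err_rec)))
print("MSE, sample mean     :", np.mean(np.square(err_mean)))
```

With step sizes satisfying (3.214) the recursion converges without storing past data, and under heavy-tailed contamination its mean-square error should be noticeably smaller than that of the sample mean.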

The proof of the third part of the theorem is based on determining the conditions under which the following propositions of Sacks’ theorem are satisfied

$$R(\theta^{*} ) = 0;\quad \left( {\theta - \theta^{*} } \right)R(\theta ) > 0\quad \text{for}\;\forall \theta \ne \theta^{*} ,$$
(3.225)
$$R^{{\prime }} (\theta^{*} ) > 0$$
(3.226)

$$\left| {R(\theta )} \right| \le k_{1} \left| {\theta - \theta^{*} } \right|\quad \text{for}\;\forall \theta \ne \theta^{*} \;\text{and}\;k_{1} > 0,$$

$$\mathop {\inf }\limits_{{t_{1} \le \left| {\theta - \theta^{*} } \right| \le t_{2} }} \left| {R(\theta )} \right| > 0\quad \text{for}\;\forall t_{1} ,t_{2} > 0\;\text{and}\;0 < t_{1} < t_{2} < \infty ,$$
(3.227)
$$\mathop {\sup }\limits_{\theta} E\left\{ {\eta^{2} (\theta )} \right\} < \infty ;\quad \mathop {\lim }\limits_{{\theta \to \theta^{*} }} E\left\{ {\eta^{2} (\theta )} \right\} = \alpha ,$$
(3.228)
$$\mathop {\lim }\limits_{R \to \infty } \mathop {\lim }\limits_{\varepsilon \to 0 + } \mathop {\sup }\limits_{{\left| {\theta - \theta^{*} } \right| < \varepsilon }} \int\limits_{\left| \eta \right| > R} {\eta^{2} (\theta )p_{\xi } (z){d}z} = 0,$$
(3.229)

which represent sufficient conditions for the asymptotic normality of the estimate (3.209), where the asymptotic variance is defined by expression (3.39).

The conditions (3.225) and (3.226), as shown, are direct consequences of propositions P1–P5.

Since \(R( \cdot )\) is a differentiable function at the point \(\theta = \theta^{*}\) (proposition P5), it can be expanded in a Taylor series

$$R(\theta ){ \sim }R^{{\prime }} (\theta^{*} )\left( {\theta - \theta^{*} } \right) + v(\theta )\left( {\theta - \theta^{*} } \right)$$
(3.230)

where it was taken into account that \(R(\theta^{*} ) = 0\) and \(\mathop {\lim }\limits_{{\theta \to \theta^{*} }} v(\theta ) = v(\theta^{*} ) = 0\). Since \(R( \cdot )\) is an odd function, it is sufficient to consider only the case \(\theta - \theta^{*} > 0\). Expression (3.230) means that for each \(\varepsilon > 0\) there will be a number \(\delta (\varepsilon ) > 0\) such that

$$\frac{{R(\theta ) - R'(\theta^{*} )\left( {\theta - \theta^{*} } \right)}}{{\theta - \theta^{*} }} < \varepsilon \quad \text{for}\;\left( {\theta - \theta^{*} } \right) < \delta (\varepsilon ),$$
(3.231)

i.e.

$$R(\theta ) < \left[ {R^{{\prime }} (\theta^{*} ) + \varepsilon } \right]\left( {\theta - \theta^{*} } \right) < k(\varepsilon )\left( {\theta - \theta^{*} } \right)\quad {\text{for } }\left( {\theta - \theta^{*} } \right) < \delta (\varepsilon ),$$
(3.232)

bearing in mind that \(R^{{\prime }} (\theta^{*} ) > 0\).

Since, further, according to proposition P4 the function \(\psi ( \cdot )\) is either bounded or does not increase faster than a linear function, we have

$$\left| {R(\theta )} \right| = \left| {\int\limits_{ - \infty }^{\infty } {\psi \left( {\theta^{*} - \theta + z} \right)p_{\xi } (z){{dz}}} } \,\right| \le k\quad {\text{for}}\; \left| {\theta^{*} - \theta } \right| > \delta (\varepsilon ),$$
(3.233)

or

$$\left| {R(\theta )} \right| \le k\int\limits_{ - \infty }^{\infty } {\left[ {1 + \left| {\theta^{*} - \theta + z} \right|} \right]p_{\xi } (z){{d}}z} \le k_{1} \left| {\theta^{*} - \theta } \right|\quad {\text{for}}\; \left| {\theta^{*} - \theta } \right| > \delta (\varepsilon ),$$
(3.234)

It follows from (3.232)–(3.234) that \(\left| {R(\theta )} \right| < k_{2} \left| {\theta^{*} - \theta } \right|\) for \(\forall \theta \ne \theta^{*}\) and \(k_{2} > 0\), thus the first part of the condition (3.227) is satisfied.

The second part of the condition (3.227) is a direct consequence of the fact that \(R(\theta )\) is a continuous, monotonically nondecreasing function with \(\left( {\theta - \theta^{*} } \right)R(\theta ) > 0\) for \(\theta \ne \theta^{*}\), which implies \(\inf \left| {R(\theta )} \right| > 0\) over every set \(t_{1} \le \left| {\theta - \theta^{*} } \right| \le t_{2}\).

Bearing in mind the presumption P4 and relations (3.211), (3.233), and (3.234) it follows that

$$\left| {\eta (\theta ,z)} \right| = \left| {\psi (\theta^{*} - \theta + z) - R(\theta )} \right| \le \left| {\psi (\theta^{*} - \theta + z)} \right| + \left| {R(\theta )} \right| \le k$$
(3.235)

if \(\psi ( \cdot )\) is a uniformly bounded function, or

$$\left| {\eta (\theta ,z)} \right| \le k_{1} \left[ {1 + \left| {\theta^{*} - \theta } \right|} \right] + k_{2}\left|z\right|$$
(3.236)

if \(\psi ( \cdot )\) does not increase faster than a linear function, which further implies that \(\eta^{2} (\theta ,z) \le k^{2}\) or

$$\eta^{2} (\theta ,z) \le \left[ {k_{1} + k_{1} \left| {\theta^{*} - \theta } \right| + k_{2} \left| z \right|} \right]^{2} \le 3\left[ {k_{1}^{2} + k_{1}^{2} \left( {\theta^{*} - \theta } \right)^{2} + k_{2}^{2} z^{2} } \right],$$

i.e.,

$$E\left\{ {\eta^{2} \left( {\theta ,z} \right)} \right\} \le k^{2} < \infty ,$$
(3.237)

or

$$E\left\{ {\eta^{2} \left( {\theta ,z} \right)} \right\} \le 3k_{1}^{2} + 3k_{1}^{2} \left( {\theta^{*} - \theta } \right)^{2} + 3k_{2}^{2} E\left[ {z^{2} } \right] < \infty \quad \text{for}\;\varepsilon < \left| {\theta^{*} - \theta } \right| < \varepsilon^{ - 1} ,$$
(3.238)

since the perturbation variance is bounded according to proposition P4. Since it holds that

$$\begin{aligned} \mathop {\lim }\limits_{{\theta \to \theta^{*} }} E\left\{ {\eta^{2} (\theta ,z)} \right\} & = \mathop {\lim }\limits_{{\theta \to \theta^{*} }} \int\limits_{ - \infty }^{\infty } {\left[ {\psi \left( {\theta^{*} - \theta + z} \right) - R(\theta )} \right]^{2} p_{\xi } (z){{d}}z} =\\ & = \int\limits_{ - \infty }^{\infty } {\psi^{2} (z)p_{\xi } (z){{d}}z} = \alpha (\theta^{*} ), \\ \end{aligned}$$
(3.239)

the condition (3.228) is also satisfied.

Bearing in mind that

$$\mathop {\lim }\limits_{R \to \infty } \mathop {\lim }\limits_{\varepsilon \to 0 + } \mathop {\sup }\limits_{{\left| {\theta - \theta^{*} } \right| < \varepsilon }} \int\limits_{\left| \eta \right| > R} {\eta^{2} \left( {\theta ,z} \right)p_{\xi } (z){d}z} \le \mathop {\lim }\limits_{R \to \infty } \mathop {\sup }\limits_{{\left| {\theta - \theta^{*} } \right| < \varepsilon }} \int\limits_{\left| \eta \right| > R} {\eta^{2} \left( {\theta ,z} \right)p_{\xi } (z){d}z}$$
(3.240)

and since, according to (3.236) and (3.237), the expectation of the random variable \(\eta^{2} ( \cdot )\) is finite, the condition (3.229) is satisfied, which concludes the proof of the theorem.
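
The asymptotic normality asserted by the third part of the theorem can likewise be illustrated empirically. The sketch below again uses assumed ingredients (a Huber-type ψ, a Gaussian perturbation, and the gain \(\gamma (i) = \gamma /i\)) and inspects the standardized error \(\sqrt n \left( {\theta (n) - \theta^{*} } \right)\) over independent runs; for an approximately Gaussian error its skewness should be near 0 and its kurtosis near 3, with a variance that should agree with expression (3.39).

```python
import numpy as np

rng = np.random.default_rng(2)
K = 1.345  # hypothetical Huber clipping constant

def psi(z, k=K):
    """Bounded, odd influence function (Huber type)."""
    return min(k, max(-k, z))

def final_estimate(n, gamma0=2.0, theta_star=1.0):
    """One realisation of theta(i) = theta(i-1) + (gamma0/i) psi(y(i) - theta(i-1))."""
    y = theta_star + rng.standard_normal(n)       # Gaussian perturbation
    theta = 0.0
    for i, yi in enumerate(y, start=1):
        theta += (gamma0 / i) * psi(yi - theta)
    return theta

n, runs, theta_star = 2000, 2000, 1.0
err = np.array([np.sqrt(n) * (final_estimate(n) - theta_star) for _ in range(runs)])

m, s = err.mean(), err.std()
skew = np.mean(((err - m) / s) ** 3)
kurt = np.mean(((err - m) / s) ** 4)
print(f"mean {m:+.3f}  variance {s**2:.3f}  skewness {skew:+.3f}  kurtosis {kurt:.3f}")
```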

Copyright information

© 2017 Academic Mind and Springer International Publishing AG

About this chapter

Cite this chapter

Kovačević, B., Milosavljević, M., Veinović, M., Marković, M. (2017). Fundamentals of Robust Parameter Estimation. In: Robust Digital Processing of Speech Signals. Springer, Cham. https://doi.org/10.1007/978-3-319-53613-2_3

  • DOI: https://doi.org/10.1007/978-3-319-53613-2_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-53611-8

  • Online ISBN: 978-3-319-53613-2
