# Second-order asymptotics in a class of purely sequential minimum risk point estimation (MRPE) methodologies


## Abstract

Under the squared error loss plus linear cost of sampling, we revisit the *minimum risk point estimation* (MRPE) problem for an unknown normal mean \(\mu\) when the variance \(\sigma ^{2}\) also remains unknown. We begin by defining a new class of purely sequential MRPE methodologies in which the requisite stopping boundary is built from a general estimator \(W_{n}\) of \(\sigma\) satisfying a set of conditions. Under such an appropriate set of sufficient conditions on \(W_{n}\) and a properly constructed associated stopping variable, we show that (i) the normalized stopping time converges in law to a normal distribution (Theorem 3.3), and (ii) the square of such a normalized stopping time is uniformly integrable (Theorem 3.4). These results subsequently lead to an *asymptotic second-order* expansion of the associated *regret* function in general (Theorem 4.1). After such general considerations, we include a number of substantial illustrations where we respectively substitute appropriate multiples of *Gini’s mean difference* and the *mean absolute deviation* in the place of the general estimator \(W_{n}\). These illustrations show a number of desirable asymptotic *first-order* and *second-order* properties under the resulting purely sequential MRPE strategies. We end this discourse by highlighting selected summaries obtained via simulations.

## Keywords

Asymptotic first-order properties; Asymptotic second-order properties; Linear cost; Regret; Risk efficiency; Sequential strategy; Simulations; Squared error loss

## Mathematics Subject Classification

62L10; 62L12; 62G05; 62G20

## 1 Introduction and a brief review

Purely sequential estimation methodologies date back to path-breaking papers of Anscombe (1950, 1952, 1953), Ray (1957), and Chow and Robbins (1965). Anscombe, Ray, and Chow and Robbins gave a solid foundation to establish purely sequential fixed-width confidence interval estimation methodologies for an unknown normal mean \(\mu\) when the population variance \(\sigma ^{2}\) remained unknown. Indeed, Chow and Robbins (1965) brought forward the fundamental nature of the theory of purely sequential nonparametric fixed-width confidence interval estimation methodologies.

The far-reaching purely sequential *minimum risk point estimation* (MRPE) methodology was originally developed by Robbins (1959) for an unknown normal mean \(\mu\) when the population variance \(\sigma ^{2}\) remained unknown. Under the *squared error loss* (SEL) plus linear cost of sampling, the sequential estimation strategy of Robbins (1959) was subsequently broadened by Starr (1966) and Starr and Woodroofe (1969), where desirable asymptotic properties associated with *efficiency*, *risk efficiency*, and *regret* were proved. Second-order properties of the associated regret were further developed by Lai and Siegmund (1977, 1979), Woodroofe (1977), and Mukhopadhyay (1988). It is appropriate to mention that Ghosh and Mukhopadhyay (1981) first introduced the notion of *asymptotic second-order efficiency*.

In a distribution-free situation, Mukhopadhyay (1978) and Ghosh and Mukhopadhyay (1979) first developed MRPE problems for the estimation of a population mean under the squared error loss plus linear cost of sampling and proved asymptotic *risk efficiency*. Chow and Yu (1981), Chow and Martinsek (1982), and a series of follow-up papers broadened this area significantly. Sen and Ghosh (1981) developed nonparametric sequential point estimation of the mean of a U-statistic. They established asymptotic *first-order efficiency* and *risk efficiency*, as well as other elegant asymptotics.

Mukhopadhyay (1982) suggested using a broader class of (nonparametric) estimators of \(\sigma ^{2}\) instead of using the customary sample variance (or sample standard deviation) as an estimator of the unknown parameter \(\sigma ^{2}\) (or \(\sigma\)) in the stopping rules. Chattopadhyay and Mukhopadhyay (2013) and Mukhopadhyay and Hu (2017, 2018) have recently looked into purely sequential estimation strategies using appropriate multiples of functions of *Gini’s mean difference* (GMD) or *mean absolute deviation* (MAD) as possible substitutes for the traditional sample variance (or sample standard deviation).

In this paper, we primarily focus on purely sequential sampling strategies rather than two-stage and other multi-stage sampling methods. We also deliberately keep the literature on numerous multivariate and regression problems out of this present discourse. For the status of wide-ranging two-stage inference methods, one may refer to Ghosh and Mukhopadhyay (1976, 1981), Aoshima and Mukhopadhyay (2002), Chattopadhyay and Mukhopadhyay (2013), and other more recent sources. One may also obtain a comprehensive review of this field from the monographs by Sen (1981, 1985), Siegmund (1985), Ghosh and Sen (1991), Mukhopadhyay and Solanky (1994), Jurečková and Sen (1996), Ghosh et al. (1997), Mukhopadhyay et al. (2004), Mukhopadhyay and de Silva (2009), Zacks (2009, 2017), and other relevant sources.

In this paper, we revisit the *minimum risk point estimation* (MRPE) problem for an unknown normal mean \(\mu\) when the variance \(\sigma ^{2}\) also remains unknown. We begin with the formulation of a new and general class of purely sequential sampling methodologies based on an arbitrary estimator \(W_{n}\) of \(\sigma\). Under appropriate sufficient conditions on \(W_{n}\), we prove *asymptotic risk efficiency* (Theorem 3.2) and develop an *asymptotic second-order* expansion of the associated *regret* function (Theorem 3.4) in general.

After these general considerations, we include substantial illustrations where we respectively substitute *Gini’s mean difference* (GMD) and the *mean absolute deviation* (MAD) in the place of the general estimator \(W_{n}\). These illustrations exhibit a number of desirable asymptotic *first-order* and *second-order* properties under the resulting purely sequential sampling strategies. We end this discourse by highlighting selected summaries obtained via simulations.

The objective of this paper is to revisit in depth the purely sequential minimum risk point estimation methodologies involving GMD or MAD established in Mukhopadhyay and Hu (2017). Having proposed a new purely sequential methodology based on nonparametric estimators satisfying certain conditions, we develop asymptotic second-order results which are considerably stronger than those reported there. The formulation of the newly proposed methodology is presented in Sect. 2. The main theorems are laid down in Sects. 3 and 4 in full generality, along with some of the substantial proofs. In Sect. 5, illustrations are provided, followed by summaries from simulations presented in Sect. 6. We end with some concluding thoughts.

## 2 A general formulation

Throughout this presentation, we propose to estimate the population mean \(\mu\) by the sample mean, \(\overline{X}_{n}\), based on *n* observations \(X_{1},\ldots ,X_{n}\). The sample variance \(S_{n}^{2}\) or the sample standard deviation \(S_{n}\) is regarded as a customary estimator of \(\sigma ^{2}\) or \(\sigma\). One should realize that sometimes one may use an appropriate multiple of \(S_{n}^{2}\) or \(S_{n}\) as needed, and note that these are respectively consistent estimators of \(\sigma ^{2}\) and \(\sigma\). We first review the purely sequential MRPE methodology of Robbins (1959) very briefly in Sect. 2.1, followed by our new general theory.

### 2.1 The purely sequential MRPE methodology of Robbins (1959)

The loss function in estimating \(\mu\) by \(\overline{X}_{n}\) is taken to be the squared error loss plus linear cost of sampling:

$$\begin{aligned} L_{n}\left( \mu ,\overline{X}_{n}\right) =A\left( \overline{X}_{n}-\mu \right) ^{2}+cn, \end{aligned}$$
(2.1)

where \(A(>0)\) is a known weight, *c* is the unit cost of each observation, and *n* is the sample size. The loss function (2.1) would attempt to balance between the risk due to estimation error for using \(\overline{X}_{n}\) to estimate \(\mu\) and the sampling cost of *n* observations.

The minimum risk turns out to be \(R_{n^{*}}\left( c\right)\), attained at the optimal fixed sample size \(n^{*}=(A/c)^{1/2}\sigma\) from (2.3), which simplifies to \(2cn^{*}\). Our goal is to achieve this minimum risk approximately, and we will remain mindful to point out whether this holds reasonably well in the sense of a first-order or second-order asymptotic approximation.
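In concrete numbers (a quick sketch; the function names are ours, and the values match the simulation setup of Sect. 6, where \(A=100\) and \(\sigma =2\)):

```python
import math

def optimal_n(A, c, sigma):
    # n* = (A/c)^{1/2} * sigma balances estimation risk A*sigma^2/n against cost c*n
    return math.sqrt(A / c) * sigma

def minimum_risk(A, c, sigma):
    # R_{n*}(c) = A*sigma^2/n* + c*n* simplifies to 2*c*n*
    return 2.0 * c * optimal_n(A, c, sigma)

# Values matching the simulation setup of Sect. 6 (A = 100, sigma = 2):
for c in (0.16, 0.04, 0.01, 0.0025):
    print(c, round(optimal_n(100, c, 2)))   # -> 50, 100, 200, 400
```

Note how halving the unit cost fourfold doubles the optimal sample size, reflecting the \(c^{-1/2}\) rate in \(n^{*}\).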

### 2.2 A new general purely sequential MRPE methodology

Next, we keep our option open by building a broad structure having considered an appropriate *consistent* nonparametric estimator in general for \(\sigma\), denoted by \(W_{n}\), assumed positive w.p.1. That is, \(W_{n}\) may not necessarily be a multiple of the sample standard deviation \(S_{n}\).
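Since the display (2.5) is not reproduced above, the sketch below assumes the customary purely sequential boundary \(N_{\mathcal {P}}=\inf \{n\ge m:n\ge (A/c)^{1/2}W_{n}\}\) with a pilot of size *m*; here \(W_{n}\) is taken to be the sample standard deviation \(S_{n}\) purely for illustration:

```python
import math
import random
import statistics

def sequential_stop(stream, A, c, m=10):
    """Sample until n >= sqrt(A/c) * W_n (an assumed form of boundary (2.5)).

    Here W_n is the sample standard deviation, purely for illustration.
    Returns the stopping time N and the terminal sample mean.
    """
    data = [next(stream) for _ in range(m)]      # pilot sample of size m
    boundary = math.sqrt(A / c)
    while len(data) < boundary * statistics.stdev(data):
        data.append(next(stream))
    return len(data), statistics.fmean(data)

def normal_stream(mu, sigma, seed):
    rng = random.Random(seed)
    while True:
        yield rng.gauss(mu, sigma)

# With A = 100, c = 0.04, sigma = 2, the optimal fixed sample size is n* = 100
N, xbar = sequential_stop(normal_stream(5.0, 2.0, seed=1), A=100, c=0.04)
```

With these values the realized \(N\) typically lands within a handful of observations of \(n^{*}=100\).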

Eventually, our aim is to prove uniform integrability of \(N_{\mathcal {P} }^{*2}\) where \(N_{\mathcal {P}}^{*}\) stands for the standardized stopping time, namely \(n^{*-1/2}(N_{\mathcal {P}}-n^{*}),\) associated with the methodology \(\mathcal {P}\) from (2.5). Under the methodology \(\mathcal {P}_{0}\) from (2.4), the uniform integrability of \(N_{\mathcal {P} _{0}}^{*2}\) was proved by Lai and Siegmund (1977, 1979) and Woodroofe (1977, 1982) by exploiting the nonlinear renewal theory in sequential analysis.

On the other hand, Ghosh and Mukhopadhyay (1980) verified an analogous result with the help of a directly constructed proof. See also Mukhopadhyay (1988), Mukhopadhyay and Solanky (1994, pp. 48–52), Ghosh et al. (1997, pp. 58–65), and Mukhopadhyay and de Silva (2009, pp. 155–160), and other relevant sources.

## 3 Main results under the general methodology \(\mathcal {P}\) from (2.5)

In this section, we devote our attention to the purely sequential MRPE methodology \(\mathcal {P}\) alone. Also, we will generally use [*u*] to denote the largest integer that is smaller than \(u(>0)\). It should be clear from the context when this notation is used in the proofs. Also, *I*(*A*) will stand for the indicator function of an event *A*. We assume that the general estimator \(W_{n}\) satisfies the following set of sufficient conditions:

- (C1) *Independence*: $$\begin{aligned} \overline{X}_{n} \hbox { and } \{W_{k};\ 2\le k\le n\} \hbox { are distributed independently for all } n\ge 2. \end{aligned}$$
- (C2) *Convergence in probability*: $$\begin{aligned} W_{n}\overset{P_{\mu ,\sigma }}{\rightarrow }\sigma \hbox { as } n\rightarrow \infty . \end{aligned}$$
- (C3) *Asymptotic normality*: $$\begin{aligned} \sqrt{n}\left( \sigma ^{-1}W_{n}-1\right) \overset{\pounds }{\rightarrow }N\left( 0,\delta ^{2}\right) \hbox { for some } \delta (>0) \hbox { as } n\rightarrow \infty . \end{aligned}$$
- (C4) *Uniform continuity in probability* (*u.c.i.p.*): For every \(\varepsilon >0\), there exist a large \(\nu \equiv \nu \left( \varepsilon \right)\) and a small \(\gamma >0\) for which $$\begin{aligned} P_{\mu ,\sigma }\left( \max \limits _{0\le k\le n\gamma }\left| W_{n+k}-W_{n}\right| \ge \varepsilon \right) <\varepsilon \hbox { holds for any } n\ge \nu . \end{aligned}$$
- (C5) *Kolmogorov’s inequality*: For every \(\varepsilon >0\) and some \(2\le n_{1}\le n_{2}\), $$\begin{aligned} P_{\mu ,\sigma }\left( \max \limits _{n_{1}\le n\le n_{2}}\left| W_{n}-\sigma \right| \ge \varepsilon \right) \le \varepsilon ^{-r}E_{\mu ,\sigma }\left[ \left| W_{n_{1}}-\sigma \right| ^{r}\right] , \hbox { with } r\ge 2. \end{aligned}$$
- (C6) *Order of central absolute moments*: $$\begin{aligned} \hbox {For } n\ge 2 \hbox { and } r\ge 2,\ E_{\mu ,\sigma }\left[ \left| W_{n}-\sigma \right| ^{r}\right] =O\left( n^{-r/2}\right) . \end{aligned}$$
- (C7) *Wiener’s condition*: $$\begin{aligned} E_{\mu ,\sigma }\left[ \sup \nolimits _{n\ge 2}W_{n}\right] <\infty . \end{aligned}$$
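Conditions (C2)–(C3) are easy to probe numerically for a concrete choice of \(W_{n}\). For the customary \(W_{n}=S_{n}\) under normality, (C3) holds with \(\delta ^{2}=1/2\) — the value reported for \(\mathcal {P}_{0}\) in Table 2. A minimal Monte Carlo sketch (sample sizes and replication counts are arbitrary choices of ours):

```python
import math
import random
import statistics

def delta2_estimate(n=200, reps=2000, sigma=2.0, seed=7):
    # Monte Carlo estimate of Var[sqrt(n) * (S_n / sigma - 1)];
    # under normality this limit is delta^2 = 1/2 for W_n = S_n.
    rng = random.Random(seed)
    vals = []
    for _ in range(reps):
        xs = [rng.gauss(5.0, sigma) for _ in range(n)]
        vals.append(math.sqrt(n) * (statistics.stdev(xs) / sigma - 1.0))
    return statistics.variance(vals)

est = delta2_estimate()   # should land near 0.5
```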

### Lemma 3.1

*For the purely sequential MRPE methodology* \((\mathcal {P},\overline{X}_{N_{\mathcal {P}}})\) *given by* (2.5), *under the condition* (C1), *the expressions of the associated risk efficiency and regret are respectively given by*:

$$\begin{aligned} \xi _{\mathcal {P}}\left( c\right) =\frac{R_{N_{\mathcal {P}}}\left( c\right) }{R_{n^{*}}\left( c\right) }=\frac{1}{2}E_{\mu ,\sigma }\left[ \frac{N_{\mathcal {P}}}{n^{*}}+\frac{n^{*}}{N_{\mathcal {P}}}\right] \hbox { and } \omega _{\mathcal {P}}\left( c\right) =cE_{\mu ,\sigma }\left[ \frac{\left( N_{\mathcal {P}}-n^{*}\right) ^{2}}{N_{\mathcal {P}}}\right] , \end{aligned}$$

*with* \(n^{*}\) *defined by* (2.3).

### Proof

Under the condition (C1), from (2.2) we can claim that \(E_{\mu ,\sigma }\left[ L_{N_{\mathcal {P}}}\left( \mu ,\overline{X}_{N_{ \mathcal {P}}}\right) \right] =A\sigma ^{2}E_{\mu ,\sigma }\left[ N_{\mathcal { P}}^{-1}\right] +cE_{\mu ,\sigma }\left[ N_{\mathcal {P}}\right]\). Then, the results follow from (2.7) since \(R_{n^{*}}\left( c\right) =2cn^{*}\). \(\square\)

### Lemma 3.2

*The condition* (C4) * follows from the two conditions* (C5) *and* (C6) *combined*.

### Proof

Choosing *r* such that \(O\left( \left( \nu \varepsilon ^{2}\right) ^{-r/2}\right) <\varepsilon\), it should be clear that Anscombe’s (1952) u.c.i.p. property for the sequence \(\left\{ W_{n};\ n\ge 2\right\}\) holds. \(\square\)

### Theorem 3.1

*For the purely sequential MRPE methodology* \((\mathcal {P},\overline{X}_{N_{\mathcal {P}}})\) *given by* (2.5), *under the conditions* (C2) *and* (C7), *we have*:

$$\begin{aligned} N_{\mathcal {P}}/n^{*}\rightarrow 1 \hbox { w.p.1 and } E_{\mu ,\sigma }\left[ N_{\mathcal {P}}/n^{*}\right] \rightarrow 1 \hbox { as } c\rightarrow 0, \end{aligned}$$

*with* \(n^{*}\) *defined by* (2.3).

### Proof

For sufficiently small *c*, the right-hand side of (3.3) can be bounded as follows:

### Remark 3.1

We may use a technique similar to the one which led to (3.4) to claim that \(E_{\mu ,\sigma }[N_{\mathcal {P} }^{k}]<\infty\) for any fixed \(k>0\) after combining with (3.3). Indeed, one may verify: \(\lim \limits _{c\rightarrow 0}E_{\mu ,\sigma }[\left( N_{\mathcal { P}}/n^{*}\right) ^{k}]=1\) for any fixed \(k>0,\) under (C2) and (C7).

### Lemma 3.3

*For the purely sequential MRPE methodology* \((\mathcal {P},\overline{X}_{N_{\mathcal {P}}})\) *given by* (2.5), *under the conditions* (C5)–(C6), *for any arbitrary* \(0<\eta <1\) *with* \(r\ge 2,\) *we have*:

$$\begin{aligned} P_{\mu ,\sigma }\left( N_{\mathcal {P}}\le \eta n^{*}\right) =O\left( n^{*-r/2}\right) , \end{aligned}$$

*with* \(n^{*}\) *defined by* (2.3).

### Proof

Let [*u*] stand for the largest integer that is smaller than \(u(>0)\), and we define:

for sufficiently small *c*. We may express:

### Theorem 3.2

*For the purely sequential MRPE methodology* \((\mathcal {P},\overline{X}_{N_{\mathcal {P}}})\) *given by* (2.5), *under the conditions* (C1)–(C2) *and* (C5)–(C7), *we have*:

$$\begin{aligned} \xi _{\mathcal {P}}\left( c\right) \rightarrow 1 \hbox { as } c\rightarrow 0, \end{aligned}$$

*with the risk efficiency term* \(\xi _{\mathcal {P}}(c)\) *coming from Lemma* 3.1.

### Proof

Here, *I*(*A*) stands for the indicator function of an event *A*.

We observe that \((0<)J_{1}<2\) and a bounded random variable is uniformly integrable. Also, \(J_{1}\overset{P_{\mu ,\sigma }}{\rightarrow }1\) as \(c\rightarrow 0\). Hence, \(E_{\mu ,\sigma }\left[ J_{1}\right] =1+o(1)\) as \(c\rightarrow 0\).

### Remark 3.2

### Remark 3.3

It is clear that our brief justifications of Theorems 3.1 and 3.2 under general conditions mildly overlap with the lines of proofs of Theorems 3.2 and 4.2 from Mukhopadhyay and Hu (2017). However, we keep these justifications for identifying which sufficient conditions from (C1) to (C7) are specifically used in validating the respective steps.

The next result follows from combining Anscombe’s (1952) random * central limit theorem* (random CLT) with a formal technique developed by Ghosh and Mukhopadhyay (1975) to transfer the asymptotic distributions of \(W_{N_{\mathcal {P}}},W_{N_{\mathcal {P}}-1}\) to conclude an asymptotic distribution of \(N_{\mathcal {P}}\). One may refer to Gut (2012) and Mukhopadhyay and Chattopadhyay (2012) to take into account some recent treatments of the random CLT.

The Ghosh-Mukhopadhyay theorem comes from Mukhopadhyay (1975, Chapter 2) which was fully utilized by Carroll (1977) right away to obtain asymptotic distributions of his stopping rules based on certain robust statistics. One may additionally review from Mukhopadhyay and Solanky (1994, Section 2.4), Ghosh et al. (1997, Exercise 2.7.4), among other sources.

### Theorem 3.3

*For the purely sequential MRPE methodology* \((\mathcal {P},\overline{X}_{N_{\mathcal {P}}})\) *given by* (2.5), *under the conditions* (C2)–(C6), *we have as* \(c\rightarrow 0\):

$$\begin{aligned} n^{*-1/2}\left( N_{\mathcal {P}}-n^{*}\right) \overset{\pounds }{\rightarrow }N\left( 0,\delta ^{2}\right) , \end{aligned}$$

*with* \(n^{*}\) *defined by* (2.3) *and* \(\delta ^{2}(>0)\) *coming from* (C3).

### Proof

### Theorem 3.4

*For the purely sequential MRPE methodology* \((\mathcal {P},\overline{X}_{N_{\mathcal {P}}})\) *given by* (2.5), *under the conditions* (C2)–(C6), *we have*:

$$\begin{aligned} n^{*-1}\left( N_{\mathcal {P}}-n^{*}\right) ^{2} \hbox { is uniformly integrable} \end{aligned}$$

*for sufficiently small* \(c\le c_{0}\) *with some* \(c_{0}(>0)\), \(n^{*}\) *defined by* (2.3), *and* \(\delta ^{2}(>0)\) *coming from* (C3).

### Proof

Instead of appealing to nonlinear renewal theory, we proceed to prove this theorem in the spirit of the direct proofs from Ghosh and Mukhopadhyay (1980) and Ghosh et al. (1997, Lemma 7.2.3, pp. 217–219). Recall that [*u*] denotes the largest integer that is smaller than \(u(>0)\).

For sufficiently small *c*, say \(c\le c_{3}\) with some \(c_{3}(>0),\) we get:

A fair question one may ask is the following: what kinds of statistics \(W_{n}\) would certainly satisfy the condition (C1)? The next result points in that direction.

### Lemma 3.4

*For any fixed* \(n(\ge 2)\), *suppose that* \(\overline{X}_{n}\) *continues to stand for the customary sample mean and let* \(W_{n}\) *be a statistic which exclusively involves only* \(\mathbf {Y}_{n}=(X_{1}-X_{n},X_{2}-X_{n},\ldots ,X_{n-1}-X_{n})\) *and* *n*. *Then*, \(\overline{X}_{n}\) *and* \((W_{2},W_{3},\ldots ,W_{n})\) *are independent for all fixed* \(n\ge 2\).

### Proof

With *n* fixed, the distribution of \(\mathbf {Y} _{n}\) does not involve the unknown parameter \(\mu\). Now, fix \(\sigma =\sigma _{0}(>0)\) and consider the family of distributions, \(N(\mu ,\sigma _{0}^{2})\). In this family \(N(\mu ,\sigma _{0}^{2})\), \(\overline{X}_{n}\) is a complete and sufficient statistic for \(\mu ,\) but \(\mathbf {Y}_{n}\) is an ancillary statistic, that is, \(\mathbf {Y}_{n}\)’s distribution does not involve \(\mu\). Thus, by appealing to Basu’s (1955) theorem, we can conclude that \(\overline{X}_{n}\) and \(\mathbf {Y}_{n}\) are independently distributed statistics. This statement is true for every fixed \(\sigma =\sigma _{0}(>0)\).

Thus, \(\overline{X}_{n}\) and \(\mathbf {Y}_{n}\) are independently distributed statistics in the family \(N(\mu ,\sigma ^{2})\) where \(\mu ,\sigma ^{2}\) are both unknown parameters.

Next, since \(W_{n}\) involves only \({\mathbf {Y}}_{n}\), clearly \(\overline{X} _{n}\) and \(W_{n}\) are independently distributed statistics in the family \(N(\mu ,\sigma ^{2})\) where \(\mu ,\sigma ^{2}\) are both unknown parameters. \(\square\)

### Remark 3.4

The reader will find a number of concrete examples of \(W_{n}\) in Sect. 5 satisfying the condition (C1). One may note that the result will hold even if \(\mathbf {W}_{n}\) is vector-valued. There are more applications of Basu’s (1955) theorem along these lines in Mukhopadhyay (2000, pp. 324–327).

## 4 The main result: asymptotic second-order expansion of the regret

The regret function, \(\omega _{\mathcal {P}}\left( c\right)\) from (2.7), associated with the purely sequential MRPE methodology \(\mathcal {P}\) from (2.5) was explicitly shown in Lemma 3.1. Now, we proceed with the second-order expansion of \(\omega _{\mathcal {P}}\left( c\right)\).

### Theorem 4.1

*Consider the regret function* \(\omega _{\mathcal {P}}\left( c\right)\) *from* (2.7), *associated with the purely sequential MRPE methodology* \(\mathcal {P}\) *from* (2.5). *Under the conditions* (C1)–(C7), *we have as* \(c\rightarrow 0\):

$$\begin{aligned} \omega _{\mathcal {P}}\left( c\right) =\delta ^{2}c+o\left( c\right) , \end{aligned}$$

*with* \(\delta ^{2}(>0)\) *coming from* (C3).

### Proof

## 5 Illustrations

### 5.1 Illustration 0: \(\mathcal {P}\equiv \mathcal {P}_{0}\)

Here, *Z* has all positive moments finite since the *X*’s have all positive moments finite. In other words, \(W_{n}\) substituted by \(S_{n}\) will satisfy all the stated conditions (C1)–(C7).

### 5.2 Illustration 1: \(\mathcal {P}\equiv \mathcal {P}_{1}\)

*o*(*c*). Its direct proof will follow right away from our main result, Theorem 4.1. One additional comment may be in order: observe that \(\lim _{n\rightarrow \infty }a_{n}=1\) and hence, for large enough \(n(\ge n_{0})\), we may claim that \(\left| a_{n}\right| \le 2\) and \(\left| a_{n}-1\right| \le 1\). Thus, we can express (w.p.1)

### 5.3 Illustration 2: \(\mathcal {P}\equiv \mathcal {P}_{2}\)

In this illustration, we substitute for \(W_{n}\) an appropriate multiple of *Gini’s Mean Difference* (GMD) defined as follows:

$$\begin{aligned} G_{n}=\frac{\sqrt{\pi }}{2}\left( {\begin{array}{c}n\\ 2\end{array}}\right) ^{-1}\mathop {\sum \sum }\limits _{1\le i<j\le n}\left| X_{i}-X_{j}\right| ,\quad n\ge 2, \end{aligned}$$

so that \(G_{n}\) is both unbiased and consistent for \(\sigma\) under normality.

Now, (C1) follows from Lemma 3.4 since \(\{G_{k};\ 2\le k\le n\}\) is location invariant for every fixed \(n(\ge 2)\). (C2)–(C3) follow from Hoeffding (1948). (C5)–(C6) will follow from the proof of Theorem 4.1 in Mukhopadhyay and Hu (2017, \(i=1\)). (C4) follows from (C5) and (C6) as shown in Lemma 3.2. (C7) follows from Mukhopadhyay and Hu (2017, Lemma 3.3). A direct proof of (5.8) will follow immediately from our main result, Theorem 4.1, since \(W_{n}\) substituted by \(G_{n}\) satisfies all the stated conditions (C1)–(C7).
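Assuming the normal-theory calibration \(\sqrt{\pi }/2\) times the mean pairwise absolute difference (the paper's exact display is not reproduced above), unbiasedness of the GMD-based estimator for \(\sigma\) is easy to verify by simulation:

```python
import itertools
import math
import random
import statistics

def gini_sigma(xs):
    # sqrt(pi)/2 times the mean pairwise absolute difference; under normality
    # E|X_i - X_j| = 2*sigma/sqrt(pi), so this is unbiased for sigma.
    mean_abs = statistics.fmean(abs(a - b) for a, b in itertools.combinations(xs, 2))
    return math.sqrt(math.pi) / 2.0 * mean_abs

rng = random.Random(11)
est = statistics.fmean(
    gini_sigma([rng.gauss(5.0, 2.0) for _ in range(20)]) for _ in range(2000)
)
# est should be close to sigma = 2
```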

### 5.4 Illustration 3: \(\mathcal {P}\equiv \mathcal {P}_{3}\)

In this illustration, we use an appropriate multiple of the *Mean Absolute Deviation* (MAD). As an alternative estimator of the population standard deviation \(\sigma\), the MAD came under practical scrutiny with regard to robustness issues. The MAD-based estimator is defined as follows:

$$\begin{aligned} M_{n}=\sqrt{\frac{n\pi }{2(n-1)}}\cdot n^{-1}\sum _{i=1}^{n}\left| X_{i}-\overline{X}_{n}\right| ,\quad n\ge 2, \end{aligned}$$

which is both unbiased and consistent for \(\sigma\) under normality.
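Assuming the calibration \(\sqrt{n\pi /(2(n-1))}\) times the mean absolute deviation about \(\overline{X}_{n}\) (again, the paper's exact display is not reproduced above; this multiple is the standard normal-theory choice making the estimator exactly unbiased), a simulation check:

```python
import math
import random
import statistics

def mad_sigma(xs):
    # sqrt(n*pi / (2*(n-1))) times the mean absolute deviation about the
    # sample mean; this multiple makes it exactly unbiased for sigma under
    # normality (an assumed calibration).
    n = len(xs)
    xbar = statistics.fmean(xs)
    mad = statistics.fmean(abs(x - xbar) for x in xs)
    return math.sqrt(n * math.pi / (2.0 * (n - 1))) * mad

rng = random.Random(13)
est = statistics.fmean(
    mad_sigma([rng.gauss(5.0, 2.0) for _ in range(20)]) for _ in range(2000)
)
# est should be close to sigma = 2
```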

### Remark 5.1

Obviously, there can be a large number of choices of suitable \(W_{n}\). We have exhibited four different choices of \(W_{n}\), leading to the methodologies \(\mathcal {P}\equiv \mathcal {P}_{0},\mathcal {P}_{1},\mathcal {P}_{2},\) and \(\mathcal {P}_{3}\), respectively. Which one stands out? There is no simple answer. By comparing the second-order expansions of the regret functions alone, we feel that (i) \(\mathcal {P}_{0}\) and \(\mathcal {P}_{1}\) would perform nearly identically, but (ii) \(\mathcal {P}_{2}\) may be marginally preferable to \(\mathcal {P}_{3}\). On the other hand, if it is important to require a methodology that would withstand some possible outliers, then (i) \(\mathcal {P}_{0}\) and \(\mathcal {P}_{1}\) are not very desirable, (ii) \(\mathcal {P}_{2}\) and \(\mathcal {P}_{3}\) would perform nearly identically, with \(\mathcal {P}_{2}\) having a slight edge, and (iii) both \(\mathcal {P}_{2}\) and \(\mathcal {P}_{3}\) are more robust than \(\mathcal {P}_{0}\) and \(\mathcal {P}_{1}\). From a practical point of view, one should lean in favor of using \(W_{n}\)’s from either \(\mathcal {P}_{2}\) or \(\mathcal {P}_{3}\). More pertinent details with regard to robustness issues are found in Mukhopadhyay and Hu (2017).

Explanation of the set of notation used in Table 2:

- \(n_{l}\): sample size in the \(l^{\text {th}}\) run;
- \(\overline{n}=L^{-1}\Sigma _{l=1}^{L}n_{l}\): should estimate \(n^{*}\);
- \(s(\overline{n})=\left\{ (L^{2}-L)^{-1}\Sigma _{l=1}^{L}(n_{l}-\overline{n})^{2}\right\} ^{1/2}\): estimated standard error (s.e.) of \(\overline{n}\);
- \(s_{n_{l}}^{2}\): sample variance from observed data \(x_{1},\ldots ,x_{n_{l}}\) in the \(l^{\text {th}}\) run;
- \(\widehat{R}_{n_{l}}=As_{n_{l}}^{2}/n_{l}+cn_{l}\): estimated risk in the \(l^{\text {th}}\) run;
- \(\overline{R}=L^{-1}\Sigma _{l=1}^{L}\widehat{R}_{n_{l}}\): should estimate \(R_{n^{*}}(c)\);
- \(\widehat{\xi }=\overline{R}/R_{n^{*}}(c)\): should estimate \(\xi (c)(\equiv \xi _{\mathcal {P}}(c))\);
- \(s(\widehat{\xi })=\left\{ (L^{2}-L)^{-1}\Sigma _{l=1}^{L}\left( \widehat{R}_{n_{l}}-\overline{R}\right) ^{2}\right\} ^{1/2}/R_{n^{*}}(c)\): estimated s.e. of \(\widehat{\xi }\);
- \(\widehat{r}_{n_{l}}=c(n_{l}-n^{*})^{2}/n_{l}\): estimated regret in the \(l^{\text {th}}\) run;
- \(\widehat{\omega }=L^{-1}\Sigma _{l=1}^{L}\widehat{r}_{n_{l}}\): should estimate \(\omega (c)(\equiv \omega _{\mathcal {P}}(c))\);
- \(s(\widehat{\omega })=\left\{ (L^{2}-L)^{-1}\Sigma _{l=1}^{L}\left( \widehat{r}_{n_{l}}-\widehat{\omega }\right) ^{2}\right\} ^{1/2}\): estimated s.e. of \(\widehat{\omega }\);
- \(\delta ^{2}c\): theoretical approximation of \(\omega (c)\).

## 6 Simulations

In the spirit of Mukhopadhyay and Hu (2017), we implemented the purely sequential MRPE methodologies based on the various stopping rules given by (5.3), (5.7), and (5.11), respectively, under the normal case. To be more specific, we generated pseudorandom samples from a \(N\left( 5,4\right)\) population. We also fixed the weight function \(A=100\), the pilot sample size \(m=10\), and \(\lambda =2\), while selecting a wide range of values of *c* including 0.16, 0.04, 0.01, and 0.0025, so that the optimal sample sizes \(n^{*}\) come out as 50, 100, 200, and 400 accordingly by (2.3).

\(n^{*}\) | \(100c\) | \(\mathcal {P}\) | \(\overline{n}\) | \(s(\overline{n})\) | \(\widehat{\xi }\) | \(s(\widehat{\xi })\) | \(\delta ^{2}\) | \(\widehat{\omega }/c\) | \(s(\widehat{\omega })\)
---|---|---|---|---|---|---|---|---|---
50 | 16 | \({\mathcal {P}}_{0}\) | 50.012 | 0.1671 | 0.9880 | 0.003340 | 0.5 | 0.593131 | 0.005126
 | | \({\mathcal {P}}_{1}\) | 50.199 | 0.1729 | 0.9861 | 0.003449 | 0.5 | 0.647494 | 0.007424
 | | \({\mathcal {P}}_{2}\) | 50.313 | 0.1703 | 0.9879 | 0.003377 | 0.5113 | 0.612431 | 0.005171
 | | \({\mathcal {P}}_{3}\) | 50.259 | 0.1778 | 0.9872 | 0.003339 | 0.5708 | 0.666431 | 0.005623
100 | 4 | \({\mathcal {P}}_{0}\) | 99.955 | 0.2408 | 0.9932 | 0.002404 | 0.5 | 0.599650 | 0.001153
 | | \({\mathcal {P}}_{1}\) | 100.107 | 0.2378 | 0.9922 | 0.002373 | 0.5 | 0.581725 | 0.001146
 | | \({\mathcal {P}}_{2}\) | 100.335 | 0.2347 | 0.9943 | 0.002306 | 0.5113 | 0.561200 | 0.001107
 | | \({\mathcal {P}}_{3}\) | 100.332 | 0.2495 | 0.9939 | 0.002327 | 0.5708 | 0.636125 | 0.001183
200 | 1 | \({\mathcal {P}}_{0}\) | 200.012 | 0.3325 | 0.9969 | 0.001660 | 0.5 | 0.561100 | 0.000254
 | | \({\mathcal {P}}_{1}\) | 200.132 | 0.3330 | 0.9962 | 0.001667 | 0.5 | 0.563800 | 0.000264
 | | \({\mathcal {P}}_{2}\) | 200.255 | 0.3380 | 0.9971 | 0.001661 | 0.5113 | 0.580400 | 0.000264
 | | \({\mathcal {P}}_{3}\) | 200.026 | 0.3562 | 0.9962 | 0.001659 | 0.5708 | 0.643800 | 0.000300
400 | 0.25 | \({\mathcal {P}}_{0}\) | 399.931 | 0.4588 | 0.9983 | 0.001146 | 0.5 | 0.531200 | 0.000068
 | | \({\mathcal {P}}_{1}\) | 400.225 | 0.4531 | 0.9984 | 0.001132 | 0.5 | 0.520800 | 0.000069
 | | \({\mathcal {P}}_{2}\) | 400.282 | 0.4508 | 0.9984 | 0.001114 | 0.5112 | 0.514000 | 0.000067
 | | \({\mathcal {P}}_{3}\) | 400.232 | 0.4873 | 0.9985 | 0.001145 | 0.5708 | 0.598800 | 0.000070

Table 2 presents the average simulated performances obtained from \(L(=1000)\) independent replications in the construction of each row. For specific entities shown in Table 2, we have used the set of precisely defined notation explained in Table 1.

As reflected in Table 2, the average estimated sample sizes \(\overline{n}\) lie within a narrow band of the optimal sample size \(n^{*}\). Additionally, all values of \(\widehat{\xi }\), the estimated risk efficiency, found in Column 6 are close to 1; moreover, the larger the sample size (that is, the smaller *c*) is, the closer \(\widehat{\xi }\) is to 1. This empirically verifies the asymptotic first-order risk efficiency properties of the purely sequential MRPE methodologies (2.4), (5.3), (5.7), and (5.11).

Furthermore, the values of \(\widehat{\omega }\), the estimated regrets, shown in Column 9 are extremely close to the corresponding theoretical approximations provided in Column 8 corresponding to the sequential methodologies (2.4), (5.3), (5.7), and (5.11). This empirically validates the asymptotic second-order expansion of the regret across the purely sequential MRPE methodologies (2.4), (5.3), (5.7), and (5.11).
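A miniature version of such a table row is easy to reproduce; the sketch below runs \(\mathcal {P}_{0}\) at \(c=0.04\) (so \(n^{*}=100\)), assuming the customary boundary \(n\ge (A/c)^{1/2}S_{n}\) and the performance estimators explained with Table 2's notation:

```python
import math
import random
import statistics

def run_p0(A, c, mu, sigma, m, rng):
    # One purely sequential run with W_n = S_n (methodology P_0, sketched):
    # sample until n >= sqrt(A/c) * S_n, starting from a pilot of size m.
    xs = [rng.gauss(mu, sigma) for _ in range(m)]
    while len(xs) < math.sqrt(A / c) * statistics.stdev(xs):
        xs.append(rng.gauss(mu, sigma))
    return len(xs), statistics.variance(xs)

A, c, mu, sigma, m, L = 100, 0.04, 5.0, 2.0, 10, 1000
n_star = math.sqrt(A / c) * sigma          # optimal fixed sample size (= 100 here)
R_star = 2.0 * c * n_star                  # minimum risk 2*c*n*
rng = random.Random(2024)
runs = [run_p0(A, c, mu, sigma, m, rng) for _ in range(L)]

n_bar = statistics.fmean(n for n, _ in runs)                              # ~ n*
xi_hat = statistics.fmean(A * s2 / n + c * n for n, s2 in runs) / R_star  # ~ 1
omega_hat = statistics.fmean(c * (n - n_star) ** 2 / n for n, _ in runs)  # ~ delta^2 * c
```

With \(L=1000\) replications, \(\overline{n}\) should fall near 100, \(\widehat{\xi }\) near 1, and \(\widehat{\omega }/c\) in the vicinity of \(\delta ^{2}=0.5\), echoing the \(\mathcal {P}_{0}\) rows of Table 2.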

## 7 Some concluding thoughts

We should point out that (i) the statistic \(W_{n}\) used in (2.4) is not unbiased for \(\sigma\) but it is consistent for \(\sigma\), whereas (ii) the statistics \(W_{n}\) used in (5.3), (5.7), and (5.11) are all unbiased for \(\sigma\) and are consistent for \(\sigma\). That is a major difference between the MRPE methodologies \(\mathcal {P}_{0}\) and \(\mathcal {P}_{1}-\mathcal {P}_{3}\) for estimating the normal mean.

Next, \(W_{n}\) used in (2.4) also satisfies all conditions (C1)–(C7) leading up to the asymptotic second-order regret expansion shown in (5.1). Mukhopadhyay and Hu (2017) also proposed analogous MRPE methodologies with both GMD-based and MAD-based boundaries involving suitable \(W_{n}\) where \(W_{n}^{2}\) estimated \(\sigma ^{2}\) unbiasedly and consistently. They emphasized asymptotic first-order risk efficiency properties for their individual MRPE methodologies in the context of separate problems.

In this present work, however, the statistics \(W_{n}\) used in (5.3), (5.7), and (5.11) are all unbiased for \(\sigma\) and are consistent for \(\sigma\). We began with a general unified theory with substantial illustrations newer than those in Mukhopadhyay and Hu (2017), each giving rise to appropriate asymptotic second-order expansions of the associated regret functions.

Indeed, the earlier proposed GMD-based and MAD-based MRPE methodologies of Mukhopadhyay and Hu (2017) do enjoy the exact same second-order expansions of their associated regret functions as shown in (5.8) and (5.12). However, Mukhopadhyay and Hu (2017) were not in a position to claim asymptotic second-order expansions of their regret functions. Now, we have them all, courtesy of our general theoretical treatment and Theorem 4.1.

From Table 2, it is clear that all MRPE methodologies \(\mathcal {P}_{0}-\mathcal {P}_{3}\) have nearly comparable \(\widehat{\xi }\) and \(\widehat{\omega }\) values; however, the newer MRPE methodologies \(\mathcal {P}_{2}-\mathcal {P}_{3}\) may be associated with very marginally larger regret values compared to those of \(\mathcal {P}_{0}-\mathcal {P}_{1}\). On a positive note, our newer MRPE methodologies \(\mathcal {P}_{2}-\mathcal {P}_{3}\) tend to be more robust under possibilities of observing outliers than the MRPE methodologies \(\mathcal {P}_{0}-\mathcal {P}_{1}\).

## Notes

### Acknowledgements

The comments received from two anonymous reviewers, the Associate Editor, and the Executive Editor on our earlier version have genuinely helped us in preparing this revised manuscript. We express our gratitude to all of them.

### Compliance with ethical standards

### Conflict of interest statement

On behalf of all authors, the corresponding author states that there is no conflict of interest.

## References

- Abramowitz, M., & Stegun, I. A. (1972). *Handbook of mathematical functions*, ninth printing. New York: Dover.
- Anscombe, F. J. (1950). Sampling theory of the negative binomial and logarithmic series distributions. *Biometrika*, *37*, 358–382.
- Anscombe, F. J. (1952). Large-sample theory of sequential estimation. *Proceedings of Cambridge Philosophical Society*, *48*, 600–607.
- Anscombe, F. J. (1953). Sequential estimation. *Journal of Royal Statistical Society, Series B*, *15*, 1–29.
- Aoshima, M., & Mukhopadhyay, N. (2002). Two-stage estimation of a linear function of normal means with second-order approximations. *Sequential Analysis*, *21*, 109–144.
- Babu, G. J., & Rao, C. R. (1992). Expansions for statistics involving the mean absolute deviations. *Annals of Institute of Statistical Mathematics*, *44*, 387–403.
- Basu, D. (1955). On statistics independent of a complete sufficient statistic. *Sankhyā*, *15*, 377–380.
- Carroll, R. J. (1977). On the asymptotic normality of stopping times based on robust estimators. *Sankhyā, Series A*, *39*, 355–377.
- Chattopadhyay, B., & Mukhopadhyay, N. (2013). Two-stage fixed-width confidence intervals for a normal mean in the presence of suspect outliers. *Sequential Analysis*, *32*, 134–157.
- Chow, Y. S., & Martinsek, A. T. (1982). Bounded regret of a sequential procedure for estimation of the mean. *Annals of Statistics*, *10*, 909–914.
- Chow, Y. S., & Robbins, H. (1965). On the asymptotic theory of fixed-width sequential confidence intervals for the mean. *Annals of Mathematical Statistics*, *36*, 457–462.
- Chow, Y. S., & Yu, K. F. (1981). The performance of a sequential procedure for the estimation of the mean. *Annals of Statistics*, *9*, 184–188.
- Ghosh, B. K., & Sen, P. K. (1991). *Handbook of sequential analysis*, edited volume. New York: Dekker.
- Ghosh, M., & Mukhopadhyay, N. (1975). Asymptotic normality of stopping times in sequential analysis, unpublished manuscript, Indian Statistical Institute, Calcutta, India.
- Ghosh, M., & Mukhopadhyay, N. (1976). On two fundamental problems of sequential estimation. *Sankhyā, Series B*, *38*, 203–218.
- Ghosh, M., & Mukhopadhyay, N. (1979). Sequential point estimation of the mean when the distribution is unspecified. *Communications in Statistics—Theory & Methods, Series A*, *8*, 637–652.
- Ghosh, M., & Mukhopadhyay, N. (1980). Sequential point estimation of the difference of two normal means. *Annals of Statistics*, *8*, 221–225.
- Ghosh, M., & Mukhopadhyay, N. (1981). Consistency and asymptotic efficiency of two-stage and sequential procedures. *Sankhyā, Series A*, *43*, 220–227.
- Ghosh, M., Mukhopadhyay, N., & Sen, P. K. (1997). *Sequential estimation*. New York: Wiley.
- Gini, C. (1914). Sulla misura della concentrazione e della variabilità dei caratteri. *Atti del Reale Istituto Veneto di Scienze, Lettere ed Arti*, *73*, 1203–1248.
- Gini, C. (1921). Measurement of inequality of incomes. *Economic Journal*, *31*, 124–126.
- Gut, A. (2012). Anscombe’s theorem 60 years later. *Sequential Analysis*, *31*, 368–396.
- Hoeffding, W. (1948). A class of statistics with asymptotically normal distribution. *Annals of Mathematical Statistics*, *19*, 293–325.
- Hoeffding, W. (1961). The strong law of large numbers for U-statistics. Institute of Statistics Mimeo Series #302. University of North Carolina, Chapel Hill.
- Jurečková, J., & Sen, P. K. (1996). *Robust statistical procedures*. New York: Wiley.
- Lai, T. L., & Siegmund, D. (1977). A nonlinear renewal theory with applications to sequential analysis I. *Annals of Statistics*, *5*, 946–954.
- Lai, T. L., & Siegmund, D. (1979). A nonlinear renewal theory with applications to sequential analysis II.
*Annals of Statistics*,*7*, 60–76.MathSciNetCrossRefzbMATHGoogle Scholar - Lee, A. J. (1990).
*U-statistics. Theory and practice*. New York: Dekker.zbMATHGoogle Scholar - Mukhopadhyay, N. (1975).
*Sequential methods in estimation and prediction*, Ph.D. dissertation, Indian Statistical Institute, Calcutta, India.Google Scholar - Mukhopadhyay, N. (1978). Sequential point estimation of the mean when the distribution is unspecified, Statistics Technical Report Number 312. University of Minnesota, Minneapolis.Google Scholar
- Mukhopadhyay, N. (1982). Stein’s Two-Stage Procedure and Exact Consistency,
*Scandinavian Actuarial Journal*110–122.Google Scholar - Mukhopadhyay, N. (1988). Sequential estimation problems for negative exponential populations.
*Communications in Statistics-Theory & Methods, Series A*,*17*, 2471–2506.MathSciNetCrossRefzbMATHGoogle Scholar - Mukhopadhyay, N. (2000).
*Probability and statistical inference*. New York: Dekker.zbMATHGoogle Scholar - Mukhopadhyay, N., & Chattopadhyay, B. (2012). A tribute to frank anscombe and random central limit theorem from 1952.
*Sequential Analysis*,*31*, 265–277.MathSciNetzbMATHGoogle Scholar - Mukhopadhyay, N., Datta, S., & Chattopadhyay, S. (2004).
*Applied sequential methodologies, edited volume*. New York: Dekker.CrossRefGoogle Scholar - Mukhopadhyay, N., & de Silva, B. M. (2009).
*Sequential methods and their applications*. Boca Ratton: CRC.zbMATHGoogle Scholar - Mukhopadhyay, N., & Hu, J. (2017). Confidence Intervals and point estimators for a normal mean under purely sequential strategies involving Gini’s mean difference and mean absolute deviation.
*Sequential Analysis*,*36*, 210–239.MathSciNetCrossRefzbMATHGoogle Scholar - Mukhopadhyay, N., & Hu, J. (2018). Gini’s mean difference and mean absolute deviation based two-stage estimation for a normal mean with known lower bound of variance.
*Sequential Analysis*,*37*, 204–221.MathSciNetCrossRefzbMATHGoogle Scholar - Mukhopadhyay, N., & Solanky, T. K. S. (1994).
*Multistage selection and ranking procedures*. New York: Dekker.zbMATHGoogle Scholar - Ray, W. D. (1957). Sequential confidence intervals for the mean of a normal population with unknown variance.
*Journal of Royal Statistical Society, Series B*,*19*, 133–143.MathSciNetzbMATHGoogle Scholar - Robbins, H. (1959). Sequential estimation of the mean of a normal population. In Ulf Grenander (Ed.),
*Probability and statistics, H Cramér volume*(pp. 235–245). Uppsala: Almquist & Wiksell.Google Scholar - Sen, P. K. (1981).
*Sequential nonparametrics: invariance principles and statistical inference*. New York: Wiley.zbMATHGoogle Scholar - Sen, P. K. (1985).
*Theory and applications of sequential nonparametrics, CBMS #49*. Philadelphia: SIAM.CrossRefGoogle Scholar - Sen, P. K., & Ghosh, M. (1981). Sequential point estimation of estimable parameters based on U-statistics.
*Sankhyā, Series A*,*43*, 331–344.MathSciNetzbMATHGoogle Scholar - Siegmund, D. (1985).
*Sequential analysis: Tests and confidence intervals*. New York: Springer.CrossRefzbMATHGoogle Scholar - Starr, N. (1966). On the asymptotic efficiency of a sequential procedure for estimating the mean.
*Annals of Mathematical Statistics*,*37*, 1173–1185.MathSciNetCrossRefzbMATHGoogle Scholar - Starr, N., & Woodroofe, M. (1969). Remarks on sequential point estimation.
*Proceedings of National Academy of Sciences*,*63*, 285–288.MathSciNetCrossRefzbMATHGoogle Scholar - Wiener, N. (1939). The Ergodic theorem.
*Duke Mathematical Journal*,*5*, 1–18.MathSciNetCrossRefzbMATHGoogle Scholar - Woodroofe, M. (1977). Second order approximations for sequential point and interval estimation.
*Annals of Statistics*,*5*, 984–995.MathSciNetCrossRefzbMATHGoogle Scholar - Woodroofe, M. (1982).
*Nonlinear renewal theory in sequential analysis, CBMS lecture notes #39*. Philadelphia: SIAM.CrossRefGoogle Scholar - Zacks, S. (2009).
*Stage-wise adaptive designs*. New York: Wiley.CrossRefzbMATHGoogle Scholar - Zacks, S. (2017).
*Sample path analysis and distributions of boundary crossing times, lecture notes in mathematics*. New York: Springer.CrossRefzbMATHGoogle Scholar