Estimation of a Truncation Parameter for a Two-Sided TEF

Statistical Estimation for Truncated Exponential Families

Part of the book series: SpringerBriefs in Statistics (JSSRES)


Abstract

The results on maximum likelihood estimation of a truncation parameter, together with a bias-adjustment, obtained for the one-sided truncated exponential family (oTEF) in the previous chapter are extended here to a two-sided truncated exponential family (tTEF) of distributions with two truncation parameters \(\gamma \) and \(\nu \) and a natural parameter \(\theta \) as a nuisance parameter.

Appendix D

Before proving Theorems 5.3.1, 5.4.1, and 5.5.1, we prepare a lemma.

Lemma 5.9.1

Let \(\hat{U}=\sqrt{\lambda _{2} n} (\hat{\theta }_{ML}-\theta )\). Then, the asymptotic expectations of \(\hat{U}\) and \(\hat{U}^{2}\) are given by

$$\begin{aligned}&E_{\theta ,\gamma ,\nu }(\hat{U}) = -\frac{1}{\sqrt{\lambda _{2} n} } \left\{ \frac{1}{k} \left( \frac{\partial \lambda _{1}}{\partial \gamma } \right) -\frac{1}{\tilde{k}} \left( \frac{\partial \lambda _{1}}{\partial \nu } \right) +\frac{\lambda _{3}}{2\lambda _{2}} \right\} + O\left( \frac{1}{n\sqrt{n}} \right) ,\end{aligned}$$
(5.45)
$$\begin{aligned}&E_{\theta ,\gamma ,\nu }(\hat{U}^{2}) = 1+ O\left( \frac{1}{n} \right) , \end{aligned}$$
(5.46)

where \(\lambda _{j}=\lambda _{j}(\theta ,\gamma ,\nu )\ (j=1,2,3)\), \(k=k(\theta ,\gamma ,\nu )\), and \(\tilde{k}=\tilde{k}(\theta ,\gamma ,\nu )\).

Equations (5.45) and (5.46) are obtained in the same way as (3.28) and (3.31), respectively.
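
For concreteness, consider the simplest tTEF, with \(a(x)\equiv 1\) and \(u(x)=x\), i.e. the two-sided truncated exponential distribution with density \(e^{\theta x}/b(\theta ,\gamma ,\nu )\) for \(\gamma \le x\le \nu \). This worked example is not part of the text; it assumes, as in the earlier chapters, that \(b(\theta ,\gamma ,\nu )=\int _{\gamma }^{\nu }a(x)e^{\theta u(x)}\,dx\), \(\lambda _{j}=(\partial ^{j}/\partial \theta ^{j})\log b(\theta ,\gamma ,\nu )\), and that \(k\) and \(\tilde{k}\) are the values of the density at the truncation points \(\gamma \) and \(\nu \) (consistent with the identities \(\partial \lambda _{1}/\partial \nu =\tilde{k}\{u(\nu )-\lambda _{1}\}\) and \((\partial /\partial \theta )\log \tilde{k}=u(\nu )-\lambda _{1}\) used below). For \(\theta \ne 0\) one then has

$$\begin{aligned} b(\theta ,\gamma ,\nu )=\frac{e^{\theta \nu }-e^{\theta \gamma }}{\theta },\quad k=\frac{\theta e^{\theta \gamma }}{e^{\theta \nu }-e^{\theta \gamma }},\quad \tilde{k}=\frac{\theta e^{\theta \nu }}{e^{\theta \nu }-e^{\theta \gamma }},\quad \lambda _{1}=\frac{\nu e^{\theta \nu }-\gamma e^{\theta \gamma }}{e^{\theta \nu }-e^{\theta \gamma }}-\frac{1}{\theta }, \end{aligned}$$

and \(\lambda _{2}\) and \(\lambda _{3}\) follow by differentiating \(\log b\) twice and three times with respect to \(\theta \).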

The proof of Theorem 5.3.1 By the Taylor expansion, we have

$$\begin{aligned} \frac{1}{\hat{\tilde{k}}_{\theta ,\gamma }} =\frac{1}{\tilde{k}(\theta ,\gamma ,X_{(n)})} = \frac{1}{\tilde{k}(\theta ,\gamma ,\nu )} \left[ 1- \left\{ \frac{\partial }{\partial \nu } \log {\tilde{k}(\theta ,\gamma ,\nu )} \right\} \frac{T_{(n)}}{n} +O_{p}\left( \frac{1}{n^2} \right) \right] . \end{aligned}$$
(5.47)

Substituting (5.47) into (5.4), we obtain from (5.3)

$$\begin{aligned} X_{(n)}^{*} = X_{(n)} +\frac{1}{\tilde{k} n} + \frac{1}{n^2} \tilde{A} -\frac{1}{\tilde{k} n^2} \left( \frac{\partial }{\partial \nu } \log {\tilde{k}} \right) \left( T_{(n)} + \frac{1}{\tilde{k}} \right) + O_{p}\left( \frac{1}{n^3} \right) , \end{aligned}$$

where \(\tilde{k}=\tilde{k}(\theta ,\gamma ,\nu )\) and \(\tilde{A}=\tilde{A}(\theta ,\gamma ,\nu )\), hence by (5.3)

$$\begin{aligned} T_{(n)}^{*} = n(X_{(n)}^{*} -\nu )&= T_{(n)} +\frac{1}{\tilde{k}} + \frac{1}{n} \tilde{A} -\frac{1}{\tilde{k} n} \left( \frac{\partial }{\partial \nu } \log {\tilde{k}} \right) \left( T_{(n)} + \frac{1}{\tilde{k}} \right) + O_{p}\left( \frac{1}{n^2} \right) \\&= T_{(n)} +\frac{1}{\tilde{k}} -\frac{1}{\tilde{k} n} \left( \frac{\partial }{\partial \nu } \log {\tilde{k}} \right) T_{(n)} + O_{p}\left( \frac{1}{n^2} \right) . \end{aligned}$$

Thus, we get (5.5). From (3.17), (3.19) in Appendix B1, and (5.3), it follows that the second-order asymptotic mean and variance of \(T_{(n)}^{*}\) are

$$\begin{aligned} E_{\nu }(T_{(n)}^{*}) = O\left( \frac{1}{n^2} \right) , \quad V_{\nu }(T_{(n)}^{*}) = \frac{1}{\tilde{k}^{2}} +\frac{2}{\tilde{k}^{3} n} \left( \frac{\partial }{\partial \nu } \log {\tilde{k}} \right) + O\left( \frac{1}{n^2} \right) , \end{aligned}$$

hence, we get (5.7). Thus, we complete the proof.
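
As a numerical illustration of the bias adjustment behind Theorem 5.3.1 (a minimal Monte Carlo sketch, not the book's code), the following Python snippet uses the two-sided truncated exponential example above with \(\theta \) and \(\gamma \) known, and compares the MLE \(X_{(n)}\) of \(\nu \) with \(X_{(n)}+1/\{n\,\tilde{k}(\theta ,\gamma ,X_{(n)})\}\), i.e. only the leading \(1/(\tilde{k}n)\) term of \(X_{(n)}^{*}\); the parameter values, sample size, and function names are illustrative choices.

```python
# Minimal Monte Carlo sketch (illustrative, not the book's code) of the leading
# bias adjustment behind Theorem 5.3.1, for the two-sided truncated exponential
# density f(x) = theta*exp(theta*x)/(exp(theta*nu) - exp(theta*gamma)),
# gamma <= x <= nu, with theta and gamma treated as known.
import numpy as np

rng = np.random.default_rng(0)
theta, gamma, nu = 1.0, 0.0, 2.0      # true values; theta and gamma are known here
n, n_rep = 50, 20000                  # sample size and number of replications

def sample(size):
    # inverse-CDF sampling from the two-sided truncated exponential
    u = rng.random(size)
    return np.log(np.exp(theta * gamma) + u * (np.exp(theta * nu) - np.exp(theta * gamma))) / theta

def k_tilde(x_max):
    # density of the model at its upper endpoint, with nu replaced by x_max:
    # k_tilde = theta*exp(theta*x_max)/(exp(theta*x_max) - exp(theta*gamma))
    return -theta / np.expm1(-theta * (x_max - gamma))

bias_mle = bias_adj = 0.0
for _ in range(n_rep):
    x_max = sample(n).max()                               # MLE of nu
    bias_mle += x_max - nu
    bias_adj += x_max + 1.0 / (n * k_tilde(x_max)) - nu   # leading-order adjusted MLE

print("mean bias of X_(n):        %+.5f" % (bias_mle / n_rep))
print("mean bias of adjusted MLE: %+.5f" % (bias_adj / n_rep))
```

For these values \(1/(\tilde{k}n)\approx 0.017\), so the raw maximum should show a downward bias of about that size, while the adjusted estimator's bias is markedly smaller, illustrating the removal of the leading \(1/(\tilde{k}n)\) term.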

The proof of Theorem 5.4.1 Since \(\delta =-\nu \), \(\eta =-\gamma \), and \(X_{(n)}=-Y_{(1)}\), it follows from (5.14) and (5.15) that \(\hat{k}_{0\eta }=\hat{\tilde{k}}_\gamma \), \(\hat{\lambda }^0_{\eta j}=\hat{\lambda }_{\gamma j}\ (j=2, 3)\), \(\partial ^j\hat{k}_{0\eta }/\partial \theta ^j=\partial ^j\hat{\tilde{k}}_\gamma /\partial \theta ^j\ (j=1, 2)\), and \(\partial \hat{\lambda }_{\eta 1}^0/\partial \delta = -(\partial \hat{\lambda }_{\gamma 1}/\partial \nu )\), hence, letting \(X_{(n)}^{\dagger }=-Y'_{(1)}\), we have from (5.16)

$$\begin{aligned} X_{(n)}^{\dagger }=X_{(n)}&+\frac{1}{\hat{\tilde{k}}_\gamma n} +\frac{1}{\hat{\tilde{k}}_\gamma ^2\hat{\lambda }_{\gamma 2}n^2} \left( \frac{\partial \hat{\tilde{k}}_\gamma }{\partial \theta }\right) \left\{ \frac{1}{\hat{\tilde{k}}_\gamma } \left( \frac{\partial \hat{\lambda }_{\gamma 1}}{\partial \nu }\right) -\frac{\hat{\lambda }_{\gamma 3}}{2\hat{\lambda }_{\gamma 2}}\right\} \\&+\frac{1}{2\hat{\tilde{k}}_\gamma ^2\hat{\lambda }_{\gamma 2}n^2} \left\{ \frac{\partial ^2\hat{\tilde{k}}_\gamma }{\partial \theta ^2} -\frac{2}{\hat{\tilde{k}}_\gamma } \left( \frac{\partial \hat{\tilde{k}}_\gamma }{\partial \theta }\right) ^2 \right\} , \end{aligned}$$

which coincides with (5.20), where

$$\begin{aligned}&\hat{\tilde{k}}_\gamma =\tilde{k}_\gamma \left( \hat{\theta }_{ML}, X_{(n)}\right) =\tilde{k}\left( \hat{\theta }_{ML}, \gamma , X_{(n)}\right) ,\\&\partial ^j \hat{\tilde{k}}_\gamma /\partial \theta ^j= \left( \partial ^j /\partial \theta ^j\right) \tilde{k}_\gamma \left( \hat{\theta }_{ML}, X_{(n)}\right) = \left( \partial ^j /\partial \theta ^j\right) \tilde{k}\left( \hat{\theta }_{ML}, \gamma , X_{(n)}\right) \ \ (j=1, 2),\\&\hat{\lambda }_{\gamma j}=\lambda _{\gamma j} \left( \hat{\theta }_{ML}, X_{(n)}\right) = \lambda _{j} \left( \hat{\theta }_{ML}, \gamma , X_{(n)}\right) \ \ (j=2, 3),\\&\partial \hat{\lambda }_{\gamma 1}/\partial \nu = \left( \partial /\partial \nu \right) \lambda _{\gamma 1}\left( \hat{\theta }_{ML}, X_{(n)}\right) = \left( \partial /\partial \nu \right) \lambda _{1}\left( \hat{\theta }_{ML}, \gamma , X_{(n)}\right) , \end{aligned}$$

and also from (5.17)

$$\begin{aligned} T_{(n)}^{\dagger }= T_{(n)}&+ \frac{1}{\tilde{k}_{\gamma }} - \frac{1}{\tilde{k}_{\gamma }^{2} \sqrt{\lambda _{\gamma 2} n} } \left( \frac{\partial \tilde{k}_{\gamma }}{\partial \theta } \right) \left[ \hat{U}_{\gamma } + \frac{1}{\sqrt{\lambda _{\gamma 2} n} } \left\{ -\frac{1}{\tilde{k}_{\gamma }}\left( \frac{\partial \lambda _{\gamma 1}}{\partial \nu } \right) + \frac{\lambda _{\gamma 3}}{2 \lambda _{\gamma 2} } \right\} \right] \\&-\frac{1}{\tilde{k}^2_{\gamma } n}\left( \frac{\partial \tilde{k}_{\gamma }}{\partial \nu } \right) T_{(n)} -\frac{1}{2 \tilde{k}_{\gamma }^{2} \lambda _{\gamma 2} n } \left\{ \frac{\partial ^2 \tilde{k}_{\gamma }}{\partial \theta ^2} - \frac{2}{\tilde{k}_{\gamma }} \left( \frac{\partial \tilde{k}_{\gamma }}{\partial \theta } \right) ^{2} \right\} \left( \hat{U}_\gamma ^{2} - 1\right) \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + O_{p}\left( \frac{1}{n \sqrt{n}} \right) , \end{aligned}$$

hence, (5.21) holds. Since \(T'_{(1)}=n(Y'_{(1)}-\delta )=-n(X_{(n)}-\nu )=-T_{(n)}\), \((\partial /\partial \delta )\log k_{0\eta } =-(\partial /\partial \nu )\log \tilde{k}_\gamma \) by (5.14) and \(u_0(\delta )=u(\nu )\), it follows from (5.15), (5.18), and (5.19) that (5.22) and (5.23) hold. Thus, we complete the proof.
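
To make the reflection used in this proof explicit (a reading aid in the generic tTEF notation, not taken from the text): if \(X_{1},\dots ,X_{n}\) is a sample from the tTEF density \(a(x)e^{\theta u(x)}/b(\theta ,\gamma ,\nu )\) on \([\gamma ,\nu ]\), then \(Y_{i}:=-X_{i}\ (i=1,\dots ,n)\) has the density

$$\begin{aligned} f_{Y}(y)=\frac{a(-y)\,e^{\theta u(-y)}}{b(\theta ,\gamma ,\nu )}, \qquad \delta =-\nu \le y\le -\gamma =\eta , \end{aligned}$$

which is again of tTEF form with lower truncation parameter \(\delta =-\nu \), upper truncation parameter \(\eta =-\gamma \), and sample minimum \(Y_{(1)}=-X_{(n)}\). The correspondences \(\hat{k}_{0\eta }=\hat{\tilde{k}}_{\gamma }\), \(\hat{\lambda }^{0}_{\eta j}=\hat{\lambda }_{\gamma j}\), etc., are the translations of the relevant quantities under this reflection, which is why the results for a lower truncation parameter carry over to \(\nu \).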

The proof of Theorem 5.5.1 Putting \(T_{(1)}:=n(X_{(1)}-\gamma )\), \(T_{(n)}:=n(X_{(n)}-\nu )\), and

$$\begin{aligned} Z_{1}:= \frac{1}{\sqrt{\lambda _{2}(\theta ,\gamma ,\nu ) n} } \sum _{i=1}^{n} \{ u(X_{i}) -\lambda _{1}(\theta ,\gamma ,\nu ) \}, \end{aligned}$$

we have from (3.7) in Theorem 3.4.1

$$\begin{aligned} \hat{U} = Z_{1} - \frac{\lambda _{3}}{2 \lambda _{2}^{3/2} \sqrt{n}} Z_{1}^{2} - \frac{1}{\sqrt{\lambda _{2} n}} \left\{ \left( \frac{\partial \lambda _{1}}{\partial \gamma } \right) T_{(1)} + \left( \frac{\partial \lambda _{1}}{\partial \nu } \right) T_{(n)} \right\} +O_{p}\left( \frac{1}{n} \right) . \end{aligned}$$
(5.48)

By the Taylor expansion, we have

$$\begin{aligned}&\tilde{k}(\hat{\theta }_{ML},X_{(1)},X_{(n)}) \\&= \tilde{k} + \frac{1}{\sqrt{\lambda _{2} n}} \left( \frac{\partial \tilde{k}}{\partial \theta } \right) \hat{U} +\frac{1}{n} \left( \frac{\partial \tilde{k}}{\partial \gamma } \right) T_{(1)} + \frac{1}{n} \left( \frac{\partial \tilde{k}}{\partial \nu } \right) T_{(n)} +\frac{1}{2 \lambda _{2} n} \left( \frac{\partial ^2 \tilde{k}}{\partial \theta ^2} \right) \hat{U}^{2} \\&+ O_{p}\left( \frac{1}{n\sqrt{n}} \right) , \end{aligned}$$

where \(\tilde{k}=\tilde{k}(\theta ,\gamma ,\nu )\), \(\partial \tilde{k}/ \partial \theta =(\partial / \partial \theta ) \tilde{k}(\theta ,\gamma ,\nu )\), \(\partial \tilde{k}/ \partial \gamma =(\partial / \partial \gamma ) \tilde{k}(\theta ,\gamma ,\nu )\), \(\partial \tilde{k}/ \partial \nu =(\partial / \partial \nu ) \tilde{k}(\theta ,\gamma ,\nu )\), and \(\partial ^2 \tilde{k}/ \partial \theta ^2 =(\partial ^2/ \partial \theta ^2) \tilde{k}(\theta ,\gamma ,\nu )\). Since

$$\begin{aligned}&\frac{1}{\hat{\tilde{k}}} = \frac{1}{\tilde{k}(\hat{\theta }_{ML},X_{(1)},X_{(n)})}\nonumber \\&= \frac{1}{\tilde{k}} -\frac{1}{\tilde{k}^{2}\sqrt{\lambda _{2} n}} \left( \frac{\partial \tilde{k}}{\partial \theta } \right) \hat{U} -\frac{1}{\tilde{k}^{2} n} \left( \frac{\partial \tilde{k}}{\partial \gamma } \right) T_{(1)} - \frac{1}{\tilde{k}^{2} n} \left( \frac{\partial \tilde{k}}{\partial \nu } \right) T_{(n)}- \frac{1}{2 \tilde{k}^{2} \lambda _{2} n} \left( \frac{\partial ^2 \tilde{k}}{\partial \theta ^2} \right) \hat{U}^{2} \nonumber \\&+ \frac{1}{ \tilde{k}^{3} \lambda _{2} n} \left( \frac{\partial \tilde{k}}{\partial \theta } \right) ^{2} \hat{U}^{2} + O_{p}\left( \frac{1}{n\sqrt{n}} \right) ,\\&\hat{k}=k(\hat{\theta }_{ML},X_{(1)},X_{(n)})=k(\theta ,\gamma ,\nu )+ O_{p}\left( \frac{1}{\sqrt{n}} \right) =k+ O_{p}\left( \frac{1}{\sqrt{n}} \right) ,\nonumber \\&\hat{\lambda }_{j}= \lambda _{j}(\hat{\theta }_{ML},X_{(1)},X_{(n)})= \lambda _{j}(\theta ,\gamma ,\nu ) + O_{p}\left( \frac{1}{\sqrt{n}} \right) =\lambda _{j}+ O_{p}\left( \frac{1}{\sqrt{n}} \right) \quad (j=1,2,3),\nonumber \\&\frac{\partial \hat{\lambda }_1}{\partial \gamma }= \frac{\partial \lambda _1}{\partial \gamma }\left( \hat{\theta }_{ML}, X_{(1)}, X_{(n)}\right) = \frac{\partial \lambda _1}{\partial \gamma }(\theta , \gamma , \nu )+O_p\left( \frac{1}{\sqrt{n}}\right) =\frac{\partial \lambda _1}{\partial \gamma }+O_p\left( \frac{1}{\sqrt{n}}\right) ,\nonumber \\&\frac{\partial \hat{\lambda }_{1}}{\partial \nu } = \frac{\partial \lambda _{1}}{\partial \nu }(\hat{\theta }_{ML},X_{(1)},X_{(n)}) = \frac{\partial \lambda _{1}}{\partial \nu }(\theta ,\gamma ,\nu ) +O_{p}\left( \frac{1}{\sqrt{n}} \right) =\frac{\partial \lambda _{1}}{\partial \nu }+O_{p}\left( \frac{1}{\sqrt{n}} \right) , \nonumber \\&\frac{\partial \hat{\tilde{k}}}{\partial \gamma } =\frac{\partial \tilde{k}}{\partial \gamma }(\hat{\theta }_{ML},X_{(1)},X_{(n)})= \frac{\partial \tilde{k}}{\partial \gamma }(\theta ,\gamma ,\nu ) +O_{p}\left( \frac{1}{\sqrt{n}} \right) =\frac{\partial \tilde{k}}{\partial \gamma }+O_{p}\left( \frac{1}{\sqrt{n}} \right) , \nonumber \\&\frac{\partial ^{j}\hat{\tilde{k}}}{\partial \theta ^{j}}= \frac{\partial ^{j} \tilde{k}}{\partial \theta ^{j}}(\hat{\theta }_{ML},X_{(1)},X_{(n)}) =\frac{\partial ^{j} \tilde{k}}{\partial \theta ^{j}}(\theta ,\gamma ,\nu ) +O_{p}\left( \frac{1}{\sqrt{n}} \right) =\frac{\partial ^{j} \tilde{k}}{\partial \theta ^{j}} +O_{p}\left( \frac{1}{\sqrt{n}} \right) \nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (j=1,2),\nonumber \end{aligned}$$

substituting them into (5.25), we have

$$\begin{aligned} T_{(n)}^{**}=\,&n(X_{(n)}^{**}-\nu )\nonumber \\ =\,&T_{(n)} +\frac{1}{\tilde{k}} -\frac{1}{\tilde{k}^2 \sqrt{\lambda _{2} n}} \left( \frac{\partial \tilde{k}}{\partial \theta } \right) \left[ \hat{U} +\frac{1}{\sqrt{\lambda _{2} n}} \left\{ \frac{1}{k} \left( \frac{\partial \lambda _{1}}{\partial \gamma } \right) -\frac{1}{\tilde{k}} \left( \frac{\partial \lambda _{1}}{\partial \nu } \right) +\frac{\lambda _{3}}{2\lambda _{2}} \right\} \right] \nonumber \\&- \frac{1}{\tilde{k} n} \left( \frac{\partial }{\partial \gamma } \log {\tilde{k}} \right) \left( T_{(1)} -\frac{1}{k} \right) -\frac{1}{\tilde{k} n} \left( \frac{\partial }{\partial \nu } \log {\tilde{k}} \right) T_{(n)} \nonumber \\&- \frac{1}{2 \tilde{k}^{2} \lambda _{2} n} \left\{ \frac{\partial ^2 \tilde{k}}{\partial \theta ^2}- \frac{2}{\tilde{k}} \left( \frac{\partial \tilde{k}}{\partial \theta } \right) ^{2} \right\} (\hat{U}^{2} -1)+O_{p}\left( \frac{1}{n\sqrt{n}} \right) , \end{aligned}$$
(5.49)

which shows that (5.26) holds. From (5.26) and Lemmas 3.9.1 and 5.9.1, we obtain (5.27). Since \(T_{(1)}\) and \(T_{(n)}\) are asymptotically independent, with \(E_{\theta ,\gamma ,\nu }(T_{(1)})=1/k+O(1/n)\) and \(E_{\theta ,\gamma ,\nu }(T_{(n)})=-1/\tilde{k}+O(1/n)\), it follows from (3.16) and (3.17) that

$$\begin{aligned} E_{\theta ,\gamma ,\nu } \left[ T_{(1)} T_{(n)}\right] =-\frac{1}{k \tilde{k}}+ O\left( \frac{1}{n} \right) . \end{aligned}$$
(5.50)

By (5.48), (5.50), and Lemma 5.9.1, we obtain

$$\begin{aligned} E_{\theta ,\gamma ,\nu } \left[ \left( T_{(n)}+\frac{1}{\tilde{k}} \right) \hat{U} \right] =O\left( \frac{1}{n} \right) , \end{aligned}$$
(5.51)

since \(\partial \lambda _{1}/ \partial \nu = \tilde{k} (u(\nu ) - \lambda _{1})\). Since \(E_{\theta ,\gamma ,\nu }(Z_{1}^{2}| T_{(n)})=1+O_{p}(1/n)\) by (2.59), it follows from (3.17) and (5.48) that

$$\begin{aligned} E_{\theta ,\gamma ,\nu } \left[ \left( T_{(n)}+\frac{1}{\tilde{k}} \right) (\hat{U}^{2} - 1) \right]&= O\left( \frac{1}{n\sqrt{n}} \right) . \end{aligned}$$
(5.52)

By (5.49)–(5.52), Lemmas 3.9.1 and 5.9.1, and (5.3), we have

$$\begin{aligned}&V_{\theta ,\gamma ,\nu } (T_{(n)}^{**})\\&= \left\{ 1- \frac{2}{\tilde{k} n} \left( \frac{\partial }{\partial \nu } \log {\tilde{k}} \right) \right\} E_{\theta ,\gamma ,\nu } \left[ \left( T_{(n)} +\frac{1}{\tilde{k}} \right) ^2 \right] + \frac{1}{\tilde{k}^{4} \lambda _{2} n} \left( \frac{\partial \tilde{k}}{\partial \theta } \right) ^{2} E_{\theta ,\gamma ,\nu } (\hat{U}^{2}) \\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + O\left( \frac{1}{n \sqrt{n}} \right) \\&= \frac{1}{\tilde{k}^{2}} + \frac{2}{\tilde{k}^{3} n} \left( \frac{\partial }{\partial \nu } \log {\tilde{k}} \right) + \frac{1}{\tilde{k}^{2} \lambda _{2} n} \{u(\nu ) - \lambda _{1} \}^2 + O\left( \frac{1}{n \sqrt{n}}\right) , \end{aligned}$$

since

$$\begin{aligned} \frac{\partial }{\partial \theta } \log {\tilde{k}} = u(\nu ) - \frac{\partial }{\partial \theta } \log {b(\theta ,\gamma ,\nu )} = u(\nu ) -\lambda _{1}, \end{aligned}$$

which shows that (5.28) holds. Thus, we complete the proof.
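
Finally, as a numerical counterpart of the situation treated in Theorem 5.5.1, where \(\theta \), \(\gamma \), and \(\nu \) are all unknown, the following sketch (again illustrative, not the book's code) keeps only the leading \(1/(\hat{\tilde{k}}n)\) correction appearing in the expansion (5.49), with \(\hat{\tilde{k}}=\tilde{k}(\hat{\theta }_{ML},X_{(1)},X_{(n)})\) as above. Here \(\hat{\theta }_{ML}\) is computed, with \(\gamma \) and \(\nu \) replaced by \(X_{(1)}\) and \(X_{(n)}\), from the likelihood equation \((1/n)\sum _{i=1}^{n}u(X_{i})=\lambda _{1}(\theta ,X_{(1)},X_{(n)})\) in the exponential case \(u(x)=x\); the function and variable names are illustrative choices.

```python
# Illustrative Monte Carlo sketch (not the book's code) of the leading bias
# correction when theta, gamma and nu are all unknown, for the two-sided
# truncated exponential case a(x) = 1, u(x) = x.
import numpy as np

rng = np.random.default_rng(1)
theta, gamma, nu = 1.0, 0.0, 2.0      # true values, all treated as unknown below
n, n_rep = 50, 5000

def sample(size):
    # inverse-CDF sampling from f(x) = theta*exp(theta*x)/(exp(theta*nu)-exp(theta*gamma))
    u = rng.random(size)
    return np.log(np.exp(theta * gamma) + u * (np.exp(theta * nu) - np.exp(theta * gamma))) / theta

def lam1(t, g, v):
    # lambda_1(t, g, v) = (d/dt) log b(t, g, v), the model mean of X
    if abs(t) < 1e-8:
        return 0.5 * (g + v)
    return (v * np.exp(t * v) - g * np.exp(t * g)) / (np.exp(t * v) - np.exp(t * g)) - 1.0 / t

def theta_mle(xbar, g, v):
    # solve lambda_1(t, g, v) = xbar by bisection; lambda_1 is increasing in t
    lo, hi = -50.0, 50.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lam1(mid, g, v) < xbar else (lo, mid)
    return 0.5 * (lo + hi)

bias_mle = bias_adj = 0.0
for _ in range(n_rep):
    x = sample(n)
    x_min, x_max = x.min(), x.max()
    t_hat = theta_mle(x.mean(), x_min, x_max)
    # k_tilde(t_hat, x_min, x_max) = t_hat*exp(t_hat*x_max)/(exp(t_hat*x_max)-exp(t_hat*x_min))
    k_tilde_hat = -t_hat / np.expm1(-t_hat * (x_max - x_min))
    bias_mle += x_max - nu
    bias_adj += x_max + 1.0 / (n * k_tilde_hat) - nu

print("mean bias of X_(n):        %+.5f" % (bias_mle / n_rep))
print("mean bias of adjusted MLE: %+.5f" % (bias_adj / n_rep))
```

Even with \(\theta \) estimated, the adjusted maximum removes most of the \(O(1/n)\) downward bias of \(X_{(n)}\), which is the bias reduction that the mean expansion (5.27) quantifies for the full estimator \(X_{(n)}^{**}\).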
