
Influence Functions and Efficiencies of k-Step Hettmansperger–Randles Estimators for Multivariate Location and Regression

Conference paper

Part of the book series: Springer Proceedings in Mathematics & Statistics (PROMS, volume 168)

Abstract

In Hettmansperger and Randles (Biometrika 89:851–860, 2002) spatial sign vectors were used to derive simultaneous estimators of multivariate location and shape. Oja (Multivariate nonparametric methods with R. Springer, New York, 2010) proposed a similar approach for the multivariate linear regression case. These estimators are highly robust and, under general assumptions, have a joint limiting multinormal distribution. The estimates are easy to compute using fixed-point algorithms. There are, however, no rigorous proofs of the convergence of these algorithms, and the existence and uniqueness of the solutions likewise remain unproven, although we believe that they hold under general conditions. To circumvent these problems, we consider in this paper k-step versions of the Hettmansperger and Randles (HR) location and shape estimators and their extensions to the linear regression problem. The influence functions, limiting distributions and asymptotic efficiencies of the estimators are derived in the multivariate elliptical case.
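
To make the fixed-point updates concrete, the following is a minimal sketch of the k-step iteration, written directly from the estimating equations in the appendix ((11.15) for the location step and the trace-normalized shape update used in the proof of Theorem 11.2). The initial pair used here, the sample mean and the trace-scaled sample covariance, is only a convenient \(\sqrt{n}\)-consistent placeholder and is our assumption, not the authors' prescribed initial estimators.

```python
import numpy as np

def hr_k_step(Y, k=3, mu0=None, V0=None):
    """k-step Hettmansperger-Randles estimate of location and shape.

    A minimal sketch: the location step solves ave{(y_i - mu)/||z_i||} = 0
    as in (11.15), and the shape step is the trace-normalized update used
    for V-hat_1 in the proof of Theorem 11.2.  The defaults mu0, V0
    (sample mean, trace-scaled sample covariance) are placeholder
    root-n-consistent initial estimates, not the authors' choice.
    """
    n, p = Y.shape
    mu = Y.mean(axis=0) if mu0 is None else np.asarray(mu0, float)
    V = np.cov(Y, rowvar=False) if V0 is None else np.asarray(V0, float)
    V = p * V / np.trace(V)                      # shape matrices have trace p
    for _ in range(k):
        w, U = np.linalg.eigh(V)                 # V^{-1/2} via eigendecomposition
        V_isqrt = U @ np.diag(w ** -0.5) @ U.T
        R = Y - mu                               # residuals at the previous step
        r = np.linalg.norm(R @ V_isqrt, axis=1)  # r_i = ||V^{-1/2}(y_i - mu)||
        # location update: mu_k = ave{y_i / r_i} / ave{1 / r_i}
        mu_new = (Y / r[:, None]).mean(axis=0) / (1.0 / r).mean()
        # shape update, rescaled so that trace(V) = p
        V_new = (R.T @ (R / r[:, None] ** 2)) / n
        V_new = p * V_new / np.trace(V_new)
        mu, V = mu_new, V_new
    return mu, V
```

For an n × p data matrix `Y`, `mu_hat, V_hat = hr_k_step(Y, k=3)` returns the 3-step estimates; `k=0` simply returns the initial pair.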


References

  • Arcones, M. A. (1998). Asymptotic theory for M-estimators over a convex kernel. Econometric Theory, 14, 387–422.
  • Bai, Z. D., Chen, R., Miao, B. Q., & Rao, C. R. (1990). Asymptotic theory of least distances estimate in the multivariate linear models. Statistics, 21, 503–519.
  • Brown, B. M. (1983). Statistical uses of the spatial median. Journal of the Royal Statistical Society, Series B, 45, 25–30.
  • Chakraborty, B., Chaudhuri, P., & Oja, H. (1998). Operating transformation retransformation on spatial median and angle test. Statistica Sinica, 8, 767–784.
  • Croux, C., Dehon, C., & Yadine, A. (2010). The k-step spatial sign covariance matrix. Advances in Data Analysis and Classification, 4, 137–150.
  • Davies, P. L. (1987). Asymptotic behaviour of S-estimates of multivariate location and dispersion matrices. Annals of Statistics, 15, 1269–1292.
  • Dümbgen, L., & Tyler, D. (2005). On the breakdown properties of some multivariate M-functionals. Scandinavian Journal of Statistics, 32, 247–264.
  • Fang, K. T., Kotz, S., & Ng, K. W. (1990). Symmetric multivariate and related distributions. London: Chapman and Hall.
  • Frahm, G. (2009). Asymptotic distributions of robust shape matrices and scales. Journal of Multivariate Analysis, 100, 1329–1337.
  • Hampel, F. R., Ronchetti, E. M., Rousseeuw, P. J., & Stahel, W. J. (1986). Robust statistics: The approach based on influence functions. New York: Wiley.
  • Hettmansperger, T. P., & McKean, J. W. (2011). Robust nonparametric statistical methods (2nd ed.). London: Arnold.
  • Hettmansperger, T. P., & Randles, R. H. (2002). A practical affine equivariant multivariate median. Biometrika, 89, 851–860.
  • Ilmonen, P., Serfling, R., & Oja, H. (2012). Invariant coordinate selection (ICS) functionals. International Statistical Review, 80, 93–110.
  • Locantore, N., Marron, J. S., Simpson, D. G., Tripoli, N., Zhang, J. T., & Cohen, K. L. (1999). Robust principal components for functional data. Test, 8, 1–28.
  • Marden, J. I. (1999). Some robust estimates of principal components. Statistics & Probability Letters, 43, 349–359.
  • Maronna, R. A. (1976). Robust M-estimators of multivariate location and scatter. Annals of Statistics, 4, 51–67.
  • Oja, H. (1999). Affine invariant multivariate sign and rank tests and corresponding estimates: A review. Scandinavian Journal of Statistics, 26, 319–343.
  • Oja, H. (2010). Multivariate nonparametric methods with R. New York: Springer.
  • Ollila, E., Hettmansperger, T. P., & Oja, H. (2004). Affine equivariant multivariate sign methods. Technical report, University of Jyväskylä.
  • Paindaveine, D. (2008). A canonical definition of shape. Statistics & Probability Letters, 78, 2240–2247.
  • Puri, M. L., & Sen, P. K. (1971). Nonparametric methods in multivariate analysis. New York: Wiley.
  • Taskinen, S., Sirkiä, S., & Oja, H. (2010). k-Step estimators of shape based on spatial signs and ranks. Journal of Statistical Planning and Inference, 140, 3376–3388.
  • Tyler, D., Critchley, F., Dümbgen, L., & Oja, H. (2009). Invariant coordinate selection. Journal of the Royal Statistical Society, Series B, 71, 549–592.
  • Tyler, D. E. (1987). A distribution-free M-estimator of multivariate scatter. Annals of Statistics, 15, 234–251.
  • Visuri, S., Oja, H., & Koivunen, V. (2000). Sign and rank covariance matrices. Journal of Statistical Planning and Inference, 91, 557–575.


Acknowledgements

The authors wish to thank a referee for several helpful comments and suggestions. The research was funded by the Academy of Finland (grants 251965 and 268703).

Author information

Correspondence to Hannu Oja.


Appendix

Proof (Theorem 11.1).

The functional (11.7) solves

$$\displaystyle{ E_{F}\left [\frac{\mathbf{y} -\boldsymbol{\mu }_{k}(F)} {\vert \vert \mathbf{z}\vert \vert } \right ] = \mathbf{0}, }$$
(11.15)

where \(\mathbf{z} = \mathbf{V}_{k-1}^{-1/2}(F)(\mathbf{y} -\boldsymbol{\mu }_{k-1}(F))\). Write \(F_{\epsilon } = (1-\epsilon )F_{0} +\epsilon \varDelta _{\mathbf{y}_{0}}\). Then

$$\displaystyle{\boldsymbol{\mu }_{k-1}(F_{\epsilon }) =\epsilon IF(\mathbf{y}_{0};\boldsymbol{\mu }_{k-1},F_{0})+o(\epsilon )\ \ \mbox{ and}\ \ \mathbf{V}_{k-1}(F_{\epsilon }) = \mathbf{I}_{p} +\epsilon IF(\mathbf{y}_{0};\mathbf{V}_{k-1},F_{0})+o(\epsilon )}$$

and, further,

$$\displaystyle{\vert \vert \mathbf{z}\vert \vert ^{-1} = \frac{1} {r}\left [1 + \frac{\epsilon } {r}\mathbf{u}^{T}IF(\mathbf{y}_{ 0};\boldsymbol{\mu }_{k-1},F_{0}) + \frac{\epsilon } {2}\mathbf{u}^{T}IF(\mathbf{y}_{ 0};\mathbf{V}_{k-1},F_{0})\mathbf{u} + o(\epsilon )\right ].}$$

Substituting these into (11.15) and taking the expectation under \(F_{\epsilon }\) gives

$$\displaystyle{IF(\mathbf{y}_{0};\boldsymbol{\mu }_{k},F_{0}) = [E(r^{-1})]^{-1}\mathbf{u}_{ 0} + \frac{1} {p}IF(\mathbf{y}_{0};\boldsymbol{\mu }_{k-1},F_{0}).}$$

Consider next the influence function of \(\mathbf{V}_{k}(F)\). Write (11.8) as

$$\displaystyle{ \mathbf{V}_{k}(F)\,E_{F}\bigg[\frac{(\mathbf{y}-\boldsymbol{\mu }_{k-1}(F))^{T}(\mathbf{y}-\boldsymbol{\mu }_{k-1}(F))} {\vert \vert \mathbf{z}\vert \vert ^{2}} \bigg]-p\,\mathbf{V}_{k-1}^{1/2}(F)\,E_{ F}\left [\mathbf{S}(\mathbf{z})\mathbf{S}^{T}(\mathbf{z})\right ]\,\mathbf{V}_{ k-1}^{1/2}(F) = 0, }$$

where again \(\mathbf{z} = \mathbf{V}_{k-1}^{-1/2}(F)(\mathbf{y} -\mathbf{\boldsymbol{\mu }}_{k-1}(F))\). Proceeding then as in the proof for \(\boldsymbol{\mu }_{k}(F)\), we get

$$\displaystyle{IF(\mathbf{y}_{0};\mathbf{V}_{k},F_{0}) = \frac{2} {p + 2}IF(\mathbf{y}_{0};\mathbf{V}_{k-1},F_{0}) + p\left [\mathbf{u}_{0}\mathbf{u}_{0}^{T} -\frac{1} {p}\mathbf{I}_{p}\right ].}$$
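
Iterating these two recursions and summing the resulting geometric series gives, explicitly,

$$\displaystyle{IF(\mathbf{y}_{0};\boldsymbol{\mu }_{k},F_{0}) = \left (\frac{1} {p}\right )^{k}IF(\mathbf{y}_{0};\boldsymbol{\mu }_{0},F_{0}) + \left [1 -\left (\frac{1} {p}\right )^{k}\right ]p[(p - 1)E(r^{-1})]^{-1}\,\mathbf{u}_{0}}$$

and

$$\displaystyle{IF(\mathbf{y}_{0};\mathbf{V}_{k},F_{0}) = \left ( \frac{2} {p + 2}\right )^{k}IF(\mathbf{y}_{0};\mathbf{V}_{0},F_{0}) + \left [1 -\left ( \frac{2} {p + 2}\right )^{k}\right ](p + 2)\left [\mathbf{u}_{0}\mathbf{u}_{0}^{T} -\frac{1} {p}\mathbf{I}_{p}\right ].}$$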

The result then follows from the above recursive formulas for \(IF(\mathbf{y}_{0};\boldsymbol{\mu }_{k},F_{0})\) and \(IF(\mathbf{y}_{0};\mathbf{V}_{k},F_{0})\). □ 

Proof (Theorem 11.2).

Consider first the limiting distribution of the 1-step HR location estimator. Let \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\) be a sample from a spherically symmetric distribution \(F_{0}\) and write \(r_{i} =\Vert \mathbf{y}_{i}\Vert\) and \(\mathbf{u}_{i} = r_{i}^{-1}\mathbf{y}_{i}\). Further, as we assume that \(\hat{\boldsymbol{\mu }}_{0}\) and \(\hat{\mathbf{V}} _{0}\) are \(\sqrt{n}\)-consistent, we write \(\boldsymbol{\mu }_{0}^{{\ast}}:= \sqrt{n}\hat{\boldsymbol{\mu }}_{0}\) and \(\mathbf{V}_{0}^{{\ast}}:= \sqrt{n}(\hat{\mathbf{V}} _{0} -\mathbf{I}_{p})\), where \(\boldsymbol{\mu }_{0}^{{\ast}} = O_{p}(1)\) and \(\mathbf{V}_{0}^{{\ast}} = O_{p}(1)\). Now using the delta method as in Taskinen et al. (2010), we get

$$\displaystyle{ \begin{array}{ll} &\sqrt{n}\hat{\boldsymbol{\mu }}_{ 1} =\boldsymbol{\mu }_{ 0}^{{\ast}} + \left [ave\left \{ \frac{1} {r_{i}}\left (1 + \frac{1} {r_{i}\sqrt{n}}\mathbf{u}_{i}^{T}\boldsymbol{\mu }_{0}^{{\ast}} + \frac{1} {2\sqrt{n}}\mathbf{u}_{i}^{T}\mathbf{V}_{0}^{{\ast}}\mathbf{u}_{i}\right )\right \}\right ]^{-1} \\ & \cdot \sqrt{n}\,ave\left \{\mathbf{u}_{i} - \frac{1} {\sqrt{n}r_{i}}\boldsymbol{\mu }_{0}^{{\ast}} + \frac{1} {\sqrt{n}r_{i}}\mathbf{u}_{i}\mathbf{u}_{i}^{T}\boldsymbol{\mu }_{0}^{{\ast}} + \frac{1} {2\sqrt{n}}\mathbf{u}_{i}\mathbf{u}_{i}^{T}\mathbf{V}_{0}^{{\ast}}\mathbf{u}_{i}\right \} + o_{p}(1).\end{array} }$$
(11.16)

As \(\sqrt{n}\hat{\boldsymbol{\mu }}_{0} = \sqrt{n}\,ave\{\gamma _{0}(r_{i})\mathbf{u}_{i}\} + o_{p}(1)\), the asymptotic normality of \(\sqrt{n}\hat{\boldsymbol{\mu }}_{1}\) follows from Slutsky's theorem and the joint limiting multivariate normality of \(\sqrt{n}\,ave\{\mathbf{u}_{ i}\}\) and \(\boldsymbol{\mu }_{0}^{{\ast}} = \sqrt{n}\hat{\boldsymbol{\mu }}_{0}\) (and \(E[\mathbf{u}_{i}^{T}\mathbf{V}^{{\ast}}\mathbf{u}_{i}] = p^{-1}Tr(\mathbf{V}^{{\ast}}) = 0\)). Equation (11.16) reduces to

$$\displaystyle{\sqrt{n}\hat{\boldsymbol{\mu }}_{1} = p^{-1}\sqrt{n}\,\hat{\boldsymbol{\mu }}_{ 0} + [E(r_{i}^{-1})]^{-1}\,\sqrt{n}\,ave\{\mathbf{u}_{ i}\} + o_{p}(1).}$$

Continuing in a similar way with \(\sqrt{n}\hat{\boldsymbol{\mu }}_{2}\), \(\sqrt{n}\hat{\boldsymbol{\mu }}_{ 3}\), and so on, we finally get

$$\displaystyle{\sqrt{n}\hat{\boldsymbol{\mu }}_{k} = \left (\frac{1} {p}\right )^{k}\sqrt{n}\hat{\boldsymbol{\mu }}_{ 0} + \left [1 -\left (\frac{1} {p}\right )^{k}\right ]p[(p - 1)E(r_{ i}^{-1})]^{-1}\,\sqrt{n}\,ave\{\mathbf{u}_{ i}\} + o_{p}(1).}$$

Thus \(\sqrt{n}\,\hat{\boldsymbol{\mu }}_{ k} = \sqrt{n}\,ave\{\gamma _{k}(r_{i})\mathbf{u}_{i}\} + o_{p}(1)\), and the limiting covariance matrix of \(\sqrt{n}\hat{\boldsymbol{\mu }}_{k}\) equals \(E[\gamma _{k}^{2}(r)\mathbf{u}\mathbf{u}^{T}] = p^{-1}E[\gamma _{k}^{2}(r)]\mathbf{I}_{p}.\)
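
Comparing the last display with the expansion \(\sqrt{n}\hat{\boldsymbol{\mu }}_{0} = \sqrt{n}\,ave\{\gamma _{0}(r_{i})\mathbf{u}_{i}\} + o_{p}(1)\) identifies the weight function explicitly as

$$\displaystyle{\gamma _{k}(r) = \left (\frac{1} {p}\right )^{k}\gamma _{0}(r) + \left [1 -\left (\frac{1} {p}\right )^{k}\right ]p[(p - 1)E(r^{-1})]^{-1}.}$$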

The limiting distribution of the k-step HR shape estimator can be derived in the same way, starting from the 1-step estimator

$$\displaystyle{\hat{\mathbf{V}} _{1} = p\,\left [ave\left \{ \frac{(\mathbf{y}_{i} -\hat{\boldsymbol{\mu }}_{0})^{T}(\mathbf{y}_{i} -\hat{\boldsymbol{\mu }}_{0})} {\vert \vert \hat{\mathbf{V}} _{0}^{-1/2}(\mathbf{y}_{i} -\hat{\boldsymbol{\mu }}_{0})\vert \vert ^{2}}\right \}\right ]^{-1}ave\left \{ \frac{(\mathbf{y}_{i} -\hat{\boldsymbol{\mu }}_{0})(\mathbf{y}_{i} -\hat{\boldsymbol{\mu }}_{0})^{T}} {\vert \vert \hat{\mathbf{V}} _{0}^{-1/2}(\mathbf{y}_{i} -\hat{\boldsymbol{\mu }}_{0})\vert \vert ^{2}}\right \}.}$$

Note that the estimator is scaled so that \(Tr(\hat{\mathbf{V}} _{1}) = p\). After some straightforward derivations,

$$\displaystyle{ \begin{array}{ll} &\sqrt{n}(\hat{\mathbf{V}} _{1} -\mathbf{I}_{p}) = \left [1 + \frac{1} {\sqrt{n}}ave\{\mathbf{u}_{i}^{T}\mathbf{V}_{0}^{{\ast}}\mathbf{u}_{i}\}\right ]^{-1}\bigg[p\sqrt{n}\left (ave\{\mathbf{u}_{i}\mathbf{u}_{i}^{T}\} -\frac{1} {p}\mathbf{I}_{p}\right ) \\ & + p\,ave\left \{\mathbf{u}_{i}^{T}\mathbf{V}_{0}^{{\ast}}\mathbf{u}_{i}\,\mathbf{u}_{i}\mathbf{u}_{i}^{T} + \frac{2} {r_{i}\sqrt{n}}\,\mathbf{u}_{i}^{T}\boldsymbol{\mu }_{0}^{{\ast}}\,\mathbf{u}_{i}\mathbf{u}_{i}^{T} - \frac{2} {r_{i}\sqrt{n}}\boldsymbol{\mu }_{0}^{{\ast}}\mathbf{u}_{i}^{T} -\mathbf{u}_{i}^{T}\mathbf{V}_{0}^{{\ast}}\mathbf{u}_{i}\,\mathbf{I}_{p}\right \}\bigg] + o_{p}(1). \end{array} }$$

As the joint limiting distribution of \(\sqrt{n}\,(ave\{\mathbf{u}_{ i}\mathbf{u}_{i}^{T}\} - p^{-1}\mathbf{I}_{ p})\) and \(\sqrt{n}(\hat{\mathbf{V}} _{ 0} -\mathbf{I}_{p}) = \sqrt{n}ave\{\alpha _{0}(r_{i})(\mathbf{u}_{i}\mathbf{u}_{i}^{T} - p^{-1}\mathbf{I}_{ p})\} + o_{p}(1)\) is multivariate normal, the asymptotic normality of \(\sqrt{n}(\hat{\mathbf{V}} _{ 1} -\mathbf{I}_{p})\) follows and

$$\displaystyle\begin{array}{rcl} \sqrt{ n}(\hat{\mathbf{V}} _{1} -\mathbf{I}_{p})& =& 2(p + 2)^{-1}\sqrt{n}(\hat{\mathbf{V}} _{ 0} -\mathbf{I}_{p}) + p\sqrt{n}(ave\{\mathbf{u}_{i}\mathbf{u}_{i}^{T}\} - p^{-1}\mathbf{I}_{ p}) + o_{p}(1) {}\\ & =& \sqrt{n}\,ave\{\alpha _{1}(r_{i})(\mathbf{u}_{i}\mathbf{u}_{i}^{T} - p^{-1}\mathbf{I}_{ p})\} + o_{p}(1). {}\\ \end{array}$$

Continuing in the same way, we obtain

$$\displaystyle{ \sqrt{n}(\hat{\mathbf{V}} _{k} -\mathbf{I}_{p}) = \sqrt{n}\,ave\{\alpha _{k}(r_{i})(\mathbf{u}_{i}\mathbf{u}_{i}^{T} - p^{-1}\mathbf{I}_{ p})\} + o_{p}(1). }$$
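
As in the location case, comparing the 1-step expansion above with \(\sqrt{n}(\hat{\mathbf{V}} _{0} -\mathbf{I}_{p}) = \sqrt{n}\,ave\{\alpha _{0}(r_{i})(\mathbf{u}_{i}\mathbf{u}_{i}^{T} - p^{-1}\mathbf{I}_{p})\} + o_{p}(1)\) shows that the weights satisfy the recursion \(\alpha _{k}(r) = 2(p + 2)^{-1}\alpha _{k-1}(r) + p\), and hence, explicitly,

$$\displaystyle{\alpha _{k}(r) = \left ( \frac{2} {p + 2}\right )^{k}\alpha _{0}(r) + \left [1 -\left ( \frac{2} {p + 2}\right )^{k}\right ](p + 2).}$$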

The limiting covariance matrix of \(\sqrt{n}\,vec(\hat{\mathbf{V}} _{k} -\mathbf{I}_{p})\) is then

$$\displaystyle{ \begin{array}{ll} &E\left [\alpha _{k}^{2}(r)\,vec(\mathbf{u}\mathbf{u}^{T} - p^{-1}\mathbf{I}_{p})vec^{T}(\mathbf{u}\mathbf{u}^{T} - p^{-1}\mathbf{I}_{p})\right ] \\ &\qquad = \frac{E[\alpha _{k}^{2}(r)]} {p(p+2)} (\mathbf{I}_{p^{2}} + \mathbf{K}_{p,p} - 2p^{-1}\mathbf{J}_{p}) = \frac{E[\alpha _{k}^{2}(r)]} {p(p+2)} \mathbf{C}_{p,p}(\mathbf{I}_{p}). \end{array} }$$

 □ 
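
The moment identity in the last display of the proof above is easy to check by simulation. The following sketch (ours, not part of the paper) compares a Monte Carlo estimate of \(E[vec(\mathbf{u}\mathbf{u}^{T} - p^{-1}\mathbf{I}_{p})vec^{T}(\mathbf{u}\mathbf{u}^{T} - p^{-1}\mathbf{I}_{p})]\) with the closed form; we read \(\mathbf{C}_{p,p}(\mathbf{I}_{p})\) as \(\mathbf{I}_{p^{2}} + \mathbf{K}_{p,p} - 2p^{-1}\mathbf{J}_{p}\), exactly as the display states, with \(\mathbf{K}_{p,p}\) the commutation matrix and \(\mathbf{J}_{p} = vec(\mathbf{I}_{p})vec^{T}(\mathbf{I}_{p})\).

```python
import numpy as np

# Monte Carlo check of the moment identity
#   E[vec(uu' - I_p/p) vec'(uu' - I_p/p)]
#     = (I_{p^2} + K_{p,p} - 2 p^{-1} J_p) / (p (p + 2))
# for u uniform on the unit sphere in R^p.
p, n = 3, 200_000
rng = np.random.default_rng(0)
U = rng.standard_normal((n, p))
U /= np.linalg.norm(U, axis=1, keepdims=True)        # u_i uniform on the sphere

M = np.einsum('ni,nj->nij', U, U) - np.eye(p) / p    # u_i u_i' - I_p / p
Vec = M.reshape(n, p * p)   # row-major vec; equals vec() for symmetric matrices
lhs = Vec.T @ Vec / n       # Monte Carlo estimate of the expectation

K = np.zeros((p * p, p * p))                         # commutation matrix K_{p,p}
for i in range(p):
    for j in range(p):
        K[i * p + j, j * p + i] = 1.0
J = np.eye(p).reshape(p * p, 1) @ np.eye(p).reshape(1, p * p)   # vec(I) vec'(I)
rhs = (np.eye(p * p) + K - 2.0 * J / p) / (p * (p + 2))

print(np.abs(lhs - rhs).max())   # of order n^{-1/2}, here roughly 1e-3
```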

Proof (Theorem 11.3).

First note that (11.11) is equivalent to

$$\displaystyle{E[\vert \vert \mathbf{e}\vert \vert ^{-1}\mathbf{x}\mathbf{x}^{T}]\,\mathbf{B}_{ k} - E[\vert \vert \mathbf{e}\vert \vert ^{-1}\mathbf{x}\mathbf{x}^{T}]\,\mathbf{B}_{ k-1} - E[\mathbf{x}\mathbf{S}(\mathbf{e})^{T}]\mathbf{V}_{ k-1}^{1/2} = \mathbf{0},}$$

where \(\mathbf{e} = \mathbf{V}_{k-1}^{-1/2}(\mathbf{y} -\mathbf{B}_{k}^{T}\mathbf{x})\). Proceeding as in the proof of Theorem 11.1, and assuming (without loss of generality) the spherical case with \(\mathbf{B} = \mathbf{0}\) and \(\boldsymbol{\varSigma }= \mathbf{I}_{p}\), we end up, after some tedious derivations, with

$$\displaystyle{ E\bigg[\frac{\mathbf{x}\mathbf{x}^{T}} {r} \bigg]IF(\mathbf{z};\mathbf{B}_{k},F_{0}) - E\bigg[\frac{\mathbf{x}\mathbf{x}^{T}IF(\mathbf{z};\mathbf{B}_{k-1},F_{0})\mathbf{u}\mathbf{u}^{T}} {r} \bigg] -\mathbf{x}\mathbf{u}^{T} = \mathbf{0}, }$$

where \(\mathbf{y} = r\mathbf{u}\) with \(r = \vert \vert \mathbf{y}\vert \vert\) and \(\mathbf{u} = \mathbf{y}/r\). As \(E[\mathbf{u}\mathbf{u}^{T}] = p^{-1}\mathbf{I}_{p}\), this simplifies to

$$\displaystyle{IF(\mathbf{z};\mathbf{B}_{k},F_{0}) = \frac{1} {p}IF(\mathbf{z};\mathbf{B}_{k-1},F_{0}) + E[\mathbf{x}\mathbf{x}^{T}]^{-1} \frac{\mathbf{x}\mathbf{u}^{T}} {E[r^{-1}]},}$$

and as the influence functions for all k are of the same type, we get

$$\displaystyle{ IF(\mathbf{z};\mathbf{B}_{k},F_{0}) = \left (\frac{1} {p}\right )^{k}IF(\mathbf{z};\mathbf{B}_{ 0},F_{0})+\left [1 -\left (\frac{1} {p}\right )^{k}\right ]\,E[\mathbf{x}\mathbf{x}^{T}]^{-1}p\,[(p-1)E(r^{-1})]^{-1}\,\mathbf{x}\mathbf{u}^{T}. }$$

 □ 

Proof (Theorem 11.4).

Consider first the general case, where \(\hat{\mathbf{B}} _{0}\) and \(\hat{\mathbf{V}} _{0}\) are assumed to be any \(\sqrt{n}\)-consistent estimators, and write \(\mathbf{B}_{0}^{{\ast}} = \sqrt{n}\hat{\mathbf{B}} _{0}\) and \(\mathbf{V}_{0}^{{\ast}} = \sqrt{n}(\hat{\mathbf{V}} _{0} -\mathbf{I}_{p})\), where \(\mathbf{B}_{0}^{{\ast}} = O_{p}(1)\) and \(\mathbf{V}_{0}^{{\ast}} = O_{p}(1)\).

Without loss of generality, assume that \(\mathbf{B} = \mathbf{0}\) and \(\boldsymbol{\varSigma }= \mathbf{I}_{p}\), so that \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\) is a random sample from a spherical distribution with zero mean vector and identity covariance matrix. Write \(r_{i} =\Vert \mathbf{y}_{i}\Vert\) and \(\mathbf{u}_{i} = \mathbf{y}_{i}/r_{i}\). Now, as in the proof of Theorem 11.2, the 1-step HR regression estimator may be written as

$$\displaystyle{ \begin{array}{ll} &\sqrt{n}\hat{\mathbf{B}} _{1} = \mathbf{B}_{0}^{{\ast}} + \left [ave\left \{ \frac{1} {r_{i}}\left (1 + \frac{1} {\sqrt{n}r_{i}}\mathbf{x}_{i}^{T}\mathbf{B}_{0}^{{\ast}}\mathbf{u}_{i} + \frac{1} {2\sqrt{n}r_{i}}\mathbf{u}_{i}^{T}\mathbf{V}_{0}^{{\ast}}\mathbf{u}_{i}\right )\mathbf{x}_{i}\mathbf{x}_{i}^{T}\right \}\right ]^{-1} \\ & \cdot \sqrt{n}\,ave\left \{\mathbf{x}_{i}\left (\mathbf{u}_{i}^{T} - \frac{1} {\sqrt{n}r_{i}}\mathbf{x}_{i}^{T}\mathbf{B}_{0}^{{\ast}} + \frac{1} {\sqrt{n}r_{i}}\mathbf{x}_{i}^{T}\mathbf{B}_{0}^{{\ast}}\mathbf{u}_{i}\mathbf{u}_{i}^{T} + \frac{1} {2\sqrt{n}}\mathbf{u}_{i}^{T}\mathbf{V}_{0}^{{\ast}}\mathbf{u}_{i}\mathbf{u}_{i}^{T}\right )\right \} + o_{p}(1). \end{array} }$$

The limiting multivariate normality of \(\sqrt{n}\hat{\mathbf{B}}_{ 1}\) then follows from the joint limiting multivariate normality of \(\mathbf{B}_{0}^{{\ast}} = \sqrt{n}\hat{\mathbf{B}} _{0}\) and \(\sqrt{n}ave\{\mathbf{x}_{ i}\mathbf{u}_{i}^{T}\}\) and Slutsky's theorem. The above equation then reduces to

$$\displaystyle{\sqrt{n}\hat{\mathbf{B}} _{1} = p^{-1}\sqrt{n}\hat{\mathbf{B}} _{ 0} + [E(r_{i}^{-1})]^{-1}\mathbf{D}^{-1}\sqrt{n}\,ave\{\mathbf{x}_{ i}\mathbf{u}_{i}^{T}\} + o_{ p}(1),}$$

where D = E[xx T], and for the k-step HR regression estimator we get

$$\displaystyle{\sqrt{n}\hat{\mathbf{B}} _{k} = \left (\frac{1} {p}\right )^{k}\sqrt{n}\hat{\mathbf{B}} _{ 0} +\left [1 -\left (\frac{1} {p}\right )^{k}\right ]p[(p-1)E(r_{ i}^{-1})]^{-1}\,\mathbf{D}^{-1}\sqrt{n}\,ave\{\mathbf{x}_{ i}\mathbf{u}_{i}^{T}\}+o_{ p}(1)}$$

again with limiting multivariate normality.
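
In particular, if the initial estimator admits an expansion \(\sqrt{n}\,\hat{\mathbf{B}} _{0} = \mathbf{D}^{-1}\sqrt{n}\,ave\{\eta _{0}(r_{i})\mathbf{x}_{i}\mathbf{u}_{i}^{T}\} + o_{p}(1)\) (an assumption of the same type as in the location case, and the situation considered next), the last display identifies the k-step weights as

$$\displaystyle{\eta _{k}(r) = \left (\frac{1} {p}\right )^{k}\eta _{0}(r) + \left [1 -\left (\frac{1} {p}\right )^{k}\right ]p[(p - 1)E(r^{-1})]^{-1},}$$

our explicit solution of the recursion, of the same form as the closed forms in the location and shape cases.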

Let us next consider the simple case, where the initial estimator is of the type

$$\displaystyle{ \sqrt{n}\,\hat{\mathbf{B}}_{k} = \mathbf{D}^{-1}\sqrt{n}\,ave\{\eta _{ k}(r_{i})\mathbf{x}_{i}\mathbf{u}_{i}^{T}\} + o_{ p}(1), }$$

where \(\eta _{k}\) is given in (11.14). The covariance matrix of \(\sqrt{n}\,vec(\hat{\mathbf{B}} _{k})\) then equals

$$\displaystyle{ \begin{array}{ll} &E[\eta _{k}^{2}(r)\,vec(\mathbf{D}^{-1}\mathbf{x}\mathbf{u}^{T})vec^{T}(\mathbf{D}^{-1}\mathbf{x}\mathbf{u}^{T})] \\ & = E[\eta _{k}^{2}(r)]\,(\mathbf{I}_{p} \otimes \mathbf{D}^{-1})\,E[vec(\mathbf{x}\mathbf{u}^{T})vec^{T}(\mathbf{x}\mathbf{u}^{T})]\,(\mathbf{I}_{p} \otimes \mathbf{D}^{-1}) \\ & = E[\eta _{k}^{2}(r)]\,(\mathbf{I}_{p} \otimes \mathbf{D}^{-1})(E[\mathbf{u}\mathbf{u}^{T}] \otimes \mathbf{D})(\mathbf{I}_{p} \otimes \mathbf{D}^{-1}) \\ & = p^{-1}E[\eta _{k}^{2}(r)](\mathbf{I}_{p} \otimes \mathbf{D}^{-1}). \end{array} }$$

 □ 


Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Taskinen, S., Oja, H. (2016). Influence Functions and Efficiencies of k-Step Hettmansperger–Randles Estimators for Multivariate Location and Regression. In: Liu, R., McKean, J. (eds) Robust Rank-Based and Nonparametric Methods. Springer Proceedings in Mathematics & Statistics, vol 168. Springer, Cham. https://doi.org/10.1007/978-3-319-39065-9_11
