
Sankhya B, Volume 80, Issue 2, pp 222–262

Testing Composite Hypothesis Based on the Density Power Divergence

  • A. Basu
  • A. Mandal
  • N. Martin
  • L. Pardo
Article

Abstract

In any parametric inference problem, the robustness of the procedure is a real concern. A procedure which retains a high degree of efficiency under the model and simultaneously provides stable inference under data contamination is preferable in any practical situation to one which achieves its efficiency at the cost of robustness, or vice versa. The density power divergence family of Basu et al. (Biometrika 85, 549–559, 1998) provides a flexible class of divergences in which the trade-off between efficiency and robustness is controlled by a single parameter β. In this paper we consider general tests of parametric hypotheses based on the density power divergence. We establish the asymptotic null distribution of the test statistic and explore its asymptotic power function. Numerical results illustrate the performance of the theory developed.
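The divergence underlying the tests is that of Basu et al. (1998): for β > 0, the minimum density power divergence estimator minimizes the empirical objective H_n(θ) = ∫ f_θ^{1+β} dx − (1 + 1/β) n⁻¹ Σᵢ f_θ(Xᵢ)^β, with β → 0 recovering maximum likelihood. The sketch below illustrates the robustness–efficiency trade-off for a normal model, where the integral term has the closed form (2πσ²)^{−β/2}(1 + β)^{−1/2}. It is only an illustration of the estimation objective, not of the test statistics developed in this paper; the data, β = 0.5, and optimizer choice are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(theta, x, beta):
    """Empirical density power divergence objective for a N(mu, sigma) model.

    H_n(theta) = int f_theta^{1+beta} dx - (1 + 1/beta) * mean(f_theta(x)^beta).
    For the normal density, int f^{1+beta} dx = (2*pi*sigma^2)^(-beta/2) / sqrt(1+beta).
    """
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)  # optimize over log(sigma) to keep sigma positive
    integral = (2 * np.pi * sigma**2) ** (-beta / 2) / np.sqrt(1 + beta)
    fx = norm.pdf(x, loc=mu, scale=sigma)
    return integral - (1 + 1 / beta) * np.mean(fx**beta)

# Illustrative contaminated sample: N(0, 1) data with ~9% outliers at 10.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), np.full(10, 10.0)])

# Minimum DPD fit with beta = 0.5 (a moderately robust choice).
res = minimize(dpd_objective, x0=[np.median(x), 0.0], args=(x, 0.5))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# The sample mean (the MLE for mu) is dragged toward the outliers,
# while the DPD estimate stays near the true center of the clean data.
print(mu_hat, np.mean(x))
```

Larger β yields greater stability under contamination at some loss of model efficiency, which is the trade-off the abstract refers to.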

Keywords

Density power divergence; linear combination of chi-squares; robustness; tests of hypotheses.

Subject Classification

Primary 62F03; Secondary 62F35.


Notes

Acknowledgments

This work was partially supported by Grant MTM-2015-67057-P. The authors gratefully acknowledge the suggestions of two anonymous referees which led to an improved version of the paper. The authors would like to thank Dr. Abhik Ghosh for preparing the plot of the influence function.

References

  1. Aitchison, J. and Silvey, S. (1958). Maximum-likelihood estimation of parameters subject to restraints. Ann. Math. Statist., 813–828.
  2. Basu, A., Harris, I.R., Hjort, N.L. and Jones, M.C. (1998). Robust and efficient estimation by minimising a density power divergence. Biometrika 85, 549–559.
  3. Basu, A., Shioya, H. and Park, C. (2011). Statistical inference: the minimum distance approach. CRC Press, Boca Raton.
  4. Basu, A., Mandal, A., Martin, N. and Pardo, L. (2013). Testing statistical hypotheses based on the density power divergence. Ann. Inst. Statist. Math. 65, 319–348.
  5. Basu, A., Mandal, A., Martin, N. and Pardo, L. (2015). Robust tests for the equality of two normal means based on the density power divergence. Metrika 78, 611–634.
  6. Basu, A., Mandal, A., Martin, N. and Pardo, L. (2016). Generalized Wald-type tests based on minimum density power divergence estimators. Statistics 50, 1–26.
  7. Broniatowski, M., Toma, A. and Vajda, I. (2012). Decomposable pseudodistances and applications in statistical estimation. J. Statist. Plann. Inference 142, 2574–2585.
  8. Darwin, C. (1878). The effects of cross and self fertilization in the vegetable kingdom. John Murray, London.
  9. Davies, R.B. (1980). The distribution of a linear combination of χ² random variables. Algorithm AS 155. Appl. Statist. 29, 323–333.
  10. De Angelis, D. and Young, G.A. (1992). Smoothing the bootstrap. Internat. Statist. Rev. 60, 45–56.
  11. Dik, J.J. and de Gunst, M.C.M. (1985). The distribution of general quadratic forms in normal variables. Statist. Neerlandica 39, 14–26.
  12. Dixon, W.J. and Tukey, J.W. (1968). Approximate behavior of the distribution of winsorized t (trimming/winsorization 2). Technometrics 10, 83–98.
  13. Eckler, A.R. (1969). A survey of coverage problems associated with point and area targets. Technometrics 11, 561–589.
  14. Fisher, R.A. (1966). The design of experiments. Hafner Press, New York.
  15. Fraser, D.A.S. (1957). Most powerful rank-type tests. Ann. Math. Statist. 28, 1040–1043.
  16. Ghosh, A. and Basu, A. (2013). Robust estimation for independent non-homogeneous observations using density power divergence with applications to linear regression. Electron. J. Stat. 7, 2420–2456.
  17. Ghosh, A. and Basu, A. (2016). Testing composite null hypotheses based on S-divergences. Stat. Probab. Lett. 114, 38–47.
  18. Gupta, S.S. (1963). Bibliography on the multivariate normal integrals and related topics. Ann. Math. Statist. 34, 829–838.
  19. Johnson, N.L. and Kotz, S. (1968). Tables of distributions of positive definite quadratic forms in central normal variables. Sankhyā Ser. B 30, 303–314.
  20. Jones, M.C., Hjort, N.L., Harris, I.R. and Basu, A. (2001). A comparison of related density-based minimum divergence estimators. Biometrika 88, 865–873.
  21. Lehmann, E.L. (1983). Theory of point estimation. Wiley Series in Probability and Mathematical Statistics. Wiley, New York.
  22. Lindsay, B.G. (1994). Efficiency versus robustness: the case for minimum Hellinger distance and related methods. Ann. Statist. 22, 1081–1114.
  23. Martín, N. and Balakrishnan, N. (2013). Hypothesis testing in a generic nesting framework for general distributions. J. Multivariate Anal. 118, 1–23.
  24. Pardo, L. (2006). Statistical inference based on divergence measures. Chapman & Hall/CRC, Boca Raton.
  25. Sen, P.K., Singer, J.M. and de Lima, A.C.P. (2010). From finite sample to asymptotic methods in statistics. Cambridge University Press, Cambridge.
  26. Silvapulle, M.J. and Sen, P.K. (2011). Constrained statistical inference: order, inequality, and shape constraints, vol. 912. Wiley, New York.
  27. Silvey, S.D. (1975). Statistical inference. Reprint, Monographs on Statistical Subjects. Chapman and Hall, London.
  28. Simpson, D.G. (1989). Hellinger deviance tests: efficiency, breakdown points, and examples. J. Amer. Statist. Assoc. 84, 107–113.
  29. Solomon, H. (1960). Distribution of quadratic forms: tables and applications. Applied Mathematics and Statistics Laboratories, Stanford University, Stanford, California.
  30. Toma, A. and Broniatowski, M. (2011). Dual divergence estimators and tests: robustness results. J. Multivariate Anal. 102, 20–36.
  31. Toma, A. and Leoni-Aubin, S. (2010). Robust tests based on dual divergence estimators and saddlepoint approximations. J. Multivariate Anal. 101, 1143–1155.
  32. Warwick, J. and Jones, M.C. (2005). Choosing a robustness parameter. J. Stat. Comput. Simulation 75, 581–588.
  33. Welch, W.J. (1987). Rerandomizing the median in matched-pairs designs. Biometrika 74, 609–614.
  34. White, H. (1982). Maximum likelihood estimation of misspecified models. Econometrica 50, 1–25.

Copyright information

© Indian Statistical Institute 2017

Authors and Affiliations

  1. Interdisciplinary Statistical Research Unit, Indian Statistical Institute, Kolkata, India
  2. Department of Mathematics, Wayne State University, Detroit, USA
  3. Department of Statistics and O.R. II, Complutense University of Madrid, Madrid, Spain
  4. Department of Statistics and O.R. I, Complutense University of Madrid, Madrid, Spain
