
Abstract

This chapter looks at a selection of miscellaneous topics: the calculation of working weights by simulated Fisher scoring (SFS), information criteria for model selection, and bias reduction for GLMs. The latter has been used to obtain finite estimates from completely separated binary data.
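The bias-reduction idea mentioned last can be illustrated numerically. Under complete separation the ordinary logistic MLE diverges, whereas Firth's (1993) penalization by the Jeffreys prior yields finite estimates; his modified score adds the hat values to the IRLS residuals. The following is a minimal numpy sketch under that assumption; `firth_logistic` is an illustrative helper written for this example, not code from the chapter:

```python
import numpy as np

def firth_logistic(X, y, n_iter=100, tol=1e-8):
    """Bias-reduced logistic regression via Firth's modified score.

    Penalizing the likelihood by the Jeffreys prior |I(beta)|^{1/2}
    gives finite estimates even under complete separation."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        pi = 1.0 / (1.0 + np.exp(-eta))
        W = pi * (1.0 - pi)                       # IRLS working weights
        XtWX = X.T @ (W[:, None] * X)             # expected information
        XtWX_inv = np.linalg.inv(XtWX)
        # hat values h_i = W_i * x_i' (X'WX)^{-1} x_i
        h = np.einsum('ij,jk,ik->i', X, XtWX_inv, X) * W
        # Firth-modified score: ordinary score plus Jeffreys adjustment
        U = X.T @ (y - pi + h * (0.5 - pi))
        step = XtWX_inv @ U
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Completely separated data: y = 1 exactly when x > 0, so the
# ordinary MLE of the slope diverges; the Firth estimate is finite.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
beta_hat = firth_logistic(X, y)   # finite intercept and slope
```

By the symmetry of this toy dataset the fitted intercept is zero; the slope settles at a finite positive value instead of drifting to infinity as plain IRLS would.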

Keywords

Multinomial Logit Model · Score Vector · Probit Link · Akaike Information Criterion · Bayesian Information Criterion · Expected Information Matrix

References

  1. Akaike, H. 1973. Information theory and an extension of the maximum likelihood principle. In B. N. Petrov and F. Csáki (Eds.), Second International Symposium on Information Theory, pp. 267–281. Budapest: Akadémiai Kiadó.
  2. Albert, A. and J. A. Anderson 1984. On the existence of maximum likelihood estimates in logistic regression models. Biometrika 71(1):1–10.
  3. Berndt, E. K., B. H. Hall, R. E. Hall, and J. A. Hausman 1974. Estimation and inference in nonlinear structural models. Annals of Economic and Social Measurement 3–4:653–665.
  4. Burnham, K. P. and D. R. Anderson 2002. Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach (Second ed.). New York: Springer.
  5. Cheney, W. and D. Kincaid 2012. Numerical Mathematics and Computing (Seventh ed.). Boston: Brooks/Cole.
  6. Claeskens, G. and N. L. Hjort 2008. Model Selection and Model Averaging. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press.
  7. Faraway, J. J. 2015. Linear Models with R (Second ed.). Boca Raton: Chapman & Hall/CRC.
  8. Firth, D. 1993. Bias reduction of maximum likelihood estimates. Biometrika 80(1):27–38.
  9. Fox, J. and S. Weisberg 2011. An R Companion to Applied Regression (Second ed.). Thousand Oaks: Sage Publications.
  10. Gill, J. and G. King 2004. What to do when your Hessian is not invertible: Alternatives to model respecification in nonlinear estimation. Sociological Methods & Research 33(1):54–87.
  11. Greene, W. H. 2012. Econometric Analysis (Seventh ed.). Upper Saddle River: Prentice Hall.
  12. Hurvich, C. M. and C.-L. Tsai 1989. Regression and time series model selection in small samples. Biometrika 76(2):297–307.
  13. Kennedy, W. J., Jr. and J. E. Gentle 1980. Statistical Computing. New York: Marcel Dekker.
  14. Konishi, S. and G. Kitagawa 2008. Information Criteria and Statistical Modeling. Springer Series in Statistics. New York: Springer.
  15. Kosmidis, I. 2014a. Bias in parametric estimation: reduction and useful side-effects. WIREs Computational Statistics 6:185–196.
  16. Kosmidis, I. and D. Firth 2009. Bias reduction in exponential family nonlinear models. Biometrika 96(4):793–804.
  17. Kosmidis, I. and D. Firth 2010. A generic algorithm for reducing bias in parametric estimation. Electronic Journal of Statistics 4:1097–1112.
  18. Lange, K. 2010. Numerical Analysis for Statisticians (Second ed.). New York: Springer.
  19. Lange, K. 2013. Optimization (Second ed.). New York: Springer.
  20. Lesaffre, E. and A. Albert 1989. Partial separation in logistic discrimination. Journal of the Royal Statistical Society, Series B 51(1):109–116.
  21. Miller, A. 2002. Subset Selection in Regression (Second ed.). Boca Raton: Chapman & Hall/CRC.
  22. Osborne, M. R. 1992. Fisher's method of scoring. International Statistical Review 60(1):99–117.
  23. Osborne, M. R. 2006. Least squares methods in maximum likelihood problems. Optimization Methods and Software 21(6):943–959.
  24. Ripley, B. D. 2004. Selecting amongst large classes of models. See Adams et al. (2004), pp. 155–170.
  25. Rose, C. and M. D. Smith 2002. Mathematical Statistics with Mathematica. New York: Springer.
  26. Rose, C. and M. D. Smith 2013. Mathematical Statistics with Mathematica. eBook.
  27. Sakamoto, Y., M. Ishiguro, and G. Kitagawa 1986. Akaike Information Criterion Statistics. Dordrecht: D. Reidel Publishing Company.
  28. Schwarz, G. 1978. Estimating the dimension of a model. The Annals of Statistics 6(2):461–464.
  29. Venables, W. N. and B. D. Ripley 2002. Modern Applied Statistics with S (Fourth ed.). New York: Springer-Verlag.
  30. Yee, T. W. and A. G. Stephenson 2007. Vector generalized linear and additive extreme value models. Extremes 10(1–2):1–19.

Copyright information

© Thomas Yee 2015

Authors and Affiliations

  • Thomas W. Yee
    1. Department of Statistics, University of Auckland, Auckland, New Zealand
