
Notes on the Estimation of Item Response Theory Models

  • Conference paper
New Developments in Quantitative Psychology

Part of the book series: Springer Proceedings in Mathematics & Statistics (PROMS, volume 66)




Author information


Correspondence to Xinming An.


Appendices

Appendix 1: Technical Details for EM

The conditional distribution \(f(\eta \vert u_{i}{,\theta }^{(t)})\) is

$$\displaystyle{ f(\eta \vert u_{i}{,\theta }^{(t)}) = \frac{f(u_{i}\vert \eta {,\theta }^{(t)})\phi (\eta )} {\int f(u_{i}\vert \eta {,\theta }^{(t)})\phi (\eta )d\eta } = \frac{f(u_{i}\vert \eta {,\theta }^{(t)})\phi (\eta )} {f(u_{i})}. }$$
(14)

The conditional expectations involved in the Q function can then be expressed as follows:

$$\displaystyle{ E[\log P_{ij}\vert u_{i}{,\theta }^{(t)}] =\int \log P_{ ij}f(\eta \vert u_{i}{,\theta }^{(t)})d\eta, }$$
(15)
$$\displaystyle{ E[\log (1 - P_{ij})\vert u_{i}{,\theta }^{(t)}] =\int \log (1 - P_{ ij})f(\eta \vert u_{i}{,\theta }^{(t)})d\eta, }$$
(16)

and

$$\displaystyle{ E[\log \phi (\eta )\vert u_{i}{,\theta }^{(t)}] =\int \log \phi (\eta )f(\eta \vert u_{ i}{,\theta }^{(t)})d\eta. }$$
(17)

Then we have

$$\displaystyle{ \begin{array}{lll} Q_{1j}& =&\int \sum \limits _{i=1}^{N}\left [u_{ij}\log P_{ij}f(\eta \vert u_{i}{,\theta }^{(t)})+(1-u_{ij})\log (1-P_{ij})f(\eta \vert u_{i}{,\theta }^{(t)})\right ]d\eta \\ & =&\int \left [\log P_{ij}\left [\sum \limits _{i=1}^{N}u_{ij}f(\eta \vert u_{i}{,\theta }^{(t)})\right ]+\log (1-P_{ij})\left [\sum _{i=1}^{N}(1-u_{ij})f(\eta \vert u_{i}{,\theta }^{(t)})\right ]\right ]d\eta \\ & =&\int \left [\log P_{ij}r_{j}{(\theta }^{(t)})+\log (1-P_{ij})[n{(\theta }^{(t)})-r_{j}{(\theta }^{(t)})]\right ]\phi (\eta {\vert \theta }^{(t)})d\eta,\\ \end{array} }$$
(18)

where \(r_{j}{(\theta }^{(t)}) =\sum _{ i=1}^{N}u_{ij}\frac{f(u_{i}\vert \eta {,\theta }^{(t)})} {f(u_{i})}\), and \(n{(\theta }^{(t)}) =\sum _{ i=1}^{N}\frac{f(u_{i}\vert \eta {,\theta }^{(t)})} {f(u_{i})}\).

The integrals in the equations above can be approximated using G–H quadrature as follows. Note that the quadrature points \(x_{g}\) and weights \(w_{g}\) correspond to \(\phi (\eta {\vert \theta }^{(t)})\), the density function of N(0, Φ^(t)).

$$\displaystyle{ \tilde{Q}_{1j} =\sum _{ g=1}^{G}\left [\log P_{ ij}(x_{g})r_{j}(x_{g}{,\theta }^{(t)}) +\log (1 - P_{ ij}(x_{g}))(n(x_{g}{,\theta }^{(t)}) - r_{ j}(x_{g}{,\theta }^{(t)}))\right ]w_{ g}. }$$
(19)
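As a concrete illustration, the grid quantities \(r_{j}(x_{g}{,\theta }^{(t)})\) and \(n(x_{g}{,\theta }^{(t)})\) in Eq. (19) can be computed in vectorized form. The sketch below is ours, not the paper's: it assumes a unidimensional probit model with \(P_{ij}(x) = \Phi (\lambda _{j}x -\alpha _{j})\) (consistent with the derivatives in Eqs. (25) and (26)), simulated data, and function names of our choosing; it uses NumPy's Gauss–Hermite rule and SciPy's standard normal CDF `ndtr`.

```python
import numpy as np
from scipy.special import ndtr  # standard normal CDF, Phi

def gh_grid(G):
    """Gauss-Hermite nodes/weights rescaled to integrate against
    the N(0, 1) density: int f(x) phi(x) dx ~ sum_g f(x_g) w_g."""
    t, w = np.polynomial.hermite.hermgauss(G)
    return t * np.sqrt(2.0), w / np.sqrt(np.pi)

def e_step(U, alpha, lam, G=21):
    """Quadrature-grid E-step quantities of Eq. (19).
    U: (N, J) binary response matrix; P_ij(x) = Phi(lam_j * x - alpha_j).
    Returns nodes x, weights w, n(x_g, theta), and r_j(x_g, theta)."""
    x, w = gh_grid(G)
    P = np.clip(ndtr(np.outer(x, lam) - alpha),       # (G, J)
                1e-12, 1.0 - 1e-12)                   # guard log(0) at extreme nodes
    # log f(u_i | x_g) = sum_j [u log P + (1 - u) log(1 - P)], in log space
    logf = U @ np.log(P).T + (1.0 - U) @ np.log(1.0 - P).T   # (N, G)
    f = np.exp(logf)
    Li = f @ w                      # f(u_i): quadrature marginal likelihood
    ratio = f / Li[:, None]         # f(u_i | x_g) / f(u_i)
    n_g = ratio.sum(axis=0)         # n(x_g, theta^(t))
    r_gj = ratio.T @ U              # (G, J): r_j(x_g, theta^(t))
    return x, w, n_g, r_gj
```

By construction, \(\sum _{g}n(x_{g})w_{g} = N\) and \(\sum _{g}r_{j}(x_{g})w_{g} =\sum _{i}u_{ij}\), which gives a quick sanity check on an implementation.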

We take the derivatives of \(\tilde{Q}_{1j}\) with respect to the model parameters:

$$\displaystyle{ \frac{\partial \tilde{Q}_{1j}} {\partial \alpha _{j}} =\sum _{ g=1}^{G}\left [\frac{r_{j}(x_{g}{,\theta }^{(t)})} {P_{ij}(x_{g})} -\frac{n(x_{g}{,\theta }^{(t)}) - r_{j}(x_{g}{,\theta }^{(t)})} {1 - P_{ij}(x_{g})} \right ]\frac{\partial P_{ij}(x_{g})} {\partial \alpha _{j}} w_{g}, }$$
(20)
$$\displaystyle{ \frac{\partial \tilde{Q}_{1j}} {\partial \lambda _{j}} =\sum _{ g=1}^{G}\left [\frac{r_{j}(x_{g}{,\theta }^{(t)})} {P_{ij}(x_{g})} -\frac{n(x_{g}{,\theta }^{(t)}) - r_{j}(x_{g}{,\theta }^{(t)})} {1 - P_{ij}(x_{g})} \right ]\frac{\partial P_{ij}(x_{g})} {\partial \lambda _{j}} w_{g}, }$$
(21)
$$\displaystyle{ \frac{{\partial }^{2}\tilde{Q}_{1j}} {\partial \alpha _{j}^{2}} =\sum _{ g=1}^{G}\left [\left [ \frac{-r_{j}} {P_{ij}^{2}} - \frac{n - r_{j}} {{(1 - P_{ij})}^{2}}\right ]{\left [\frac{\partial P_{ij}(x_{g})} {\partial \alpha _{j}} \right ]}^{2} + \left [ \frac{r_{j}} {P_{ij}} - \frac{n - r_{j}} {1 - P_{ij}}\right ]\frac{{\partial }^{2}P_{ij}(x_{g})} {\partial \alpha _{j}^{2}} \right ]w_{g}, }$$
(22)
$$\displaystyle{ \frac{{\partial }^{2}\tilde{Q}_{1j}} {\partial \lambda _{j}^{2}} =\sum _{ g=1}^{G}\left [\left [ \frac{-r_{j}} {P_{ij}^{2}} - \frac{n - r_{j}} {{(1 - P_{ij})}^{2}}\right ]{\left [\frac{\partial P_{ij}(x_{g})} {\partial \lambda _{j}} \right ]}^{2} + \left [ \frac{r_{j}} {P_{ij}} - \frac{n - r_{j}} {1 - P_{ij}}\right ]\frac{{\partial }^{2}P_{ij}(x_{g})} {\partial \lambda _{j}^{2}} \right ]w_{g}, }$$
(23)

and

$$\displaystyle{ \frac{{\partial }^{2}\tilde{Q}_{1j}} {\partial \alpha _{j}\partial \lambda _{j}} =\sum _{ g=1}^{G}\left [\left [ \frac{-r_{j}} {P_{ij}^{2}} - \frac{n - r_{j}} {{(1 - P_{ij})}^{2}}\right ]\left [\frac{\partial P_{ij}(x_{g})} {\partial \alpha _{j}} \frac{\partial P_{ij}(x_{g})} {\partial \lambda _{j}} \right ] + \left [ \frac{r_{j}} {P_{ij}} - \frac{n - r_{j}} {1 - P_{ij}}\right ]\frac{{\partial }^{2}P_{ij}(x_{g})} {\partial \alpha _{j}\partial \lambda _{j}} \right ]w_{g}. }$$
(24)

In the above equations, we have

$$\displaystyle{ \frac{\partial P_{ij}(x_{g})} {\partial \alpha _{j}} = -\phi (\alpha _{j} -\lambda _{j}x_{g}) = -\frac{\partial Q_{ij}(x_{g})} {\partial \alpha _{j}}, }$$
(25)
$$\displaystyle{ \frac{\partial P_{ij}(x_{g})} {\partial \lambda _{j}} =\phi (\alpha _{j} -\lambda _{j}x_{g})x_{g} = -\frac{\partial Q_{ij}(x_{g})} {\partial \lambda _{j}}, }$$
(26)
$$\displaystyle{ \frac{{\partial }^{2}P_{ij}(x_{g})} {\partial \alpha _{j}^{2}} = -\frac{\partial \phi (\alpha _{j} -\lambda _{j}x_{g})} {\partial \alpha _{j}} =\phi (\alpha _{j} -\lambda _{j}x_{g})(\alpha _{j} -\lambda _{j}x_{g}) = -\frac{{\partial }^{2}Q_{ij}(x_{g})} {\partial \alpha _{j}^{2}}, }$$
(27)
$$\displaystyle{ \frac{{\partial }^{2}P_{ij}(x_{g})} {\partial \alpha _{j}\partial \lambda _{j}} = -\frac{\partial \phi (\alpha _{j} -\lambda _{j}x_{g})} {\partial \lambda _{j}} = -\phi (\alpha _{j} -\lambda _{j}x_{g})(\alpha _{j} -\lambda _{j}x_{g})x_{g} = -\frac{{\partial }^{2}Q_{ij}(x_{g})} {\partial \alpha _{j}\partial \lambda _{j}}, }$$
(28)

and

$$\displaystyle{ \frac{{\partial }^{2}P_{ij}(x_{g})} {\partial \lambda _{j}^{2}} = \frac{\partial \phi (\alpha _{j} -\lambda _{j}x_{g})x_{g}} {\partial \lambda _{j}} =\phi (\alpha _{j} -\lambda _{j}x_{g})(\alpha _{j} -\lambda _{j}x_{g})x_{g}^{2} = -\frac{{\partial }^{2}Q_{ ij}(x_{g})} {\partial \lambda _{j}^{2}}.}$$
(29)
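Putting Eqs. (20)–(29) together, a single Newton–Raphson update of \((\alpha _{j},\lambda _{j})\) in the M-step can be sketched as follows. This is our own illustration (names ours) under the parameterization \(P_{ij}(x) = \Phi (\lambda _{j}x -\alpha _{j})\) implied by the derivatives above; `r_j` and `n_g` denote the E-step quantities \(r_{j}(x_{g}{,\theta }^{(t)})\) and \(n(x_{g}{,\theta }^{(t)})\) evaluated on the quadrature grid.

```python
import numpy as np
from scipy.special import ndtr  # standard normal CDF, Phi

def newton_step_item(alpha_j, lam_j, x, w, r_j, n_g):
    """One Newton-Raphson update of (alpha_j, lambda_j) for the
    quadrature-approximated Q~_1j, assembled from Eqs. (20)-(29).
    x, w: quadrature nodes/weights for the N(0, 1) density;
    r_j, n_g: E-step quantities r_j(x_g) and n(x_g) on the grid."""
    z = alpha_j - lam_j * x
    phi = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    P = np.clip(ndtr(-z), 1e-12, 1.0 - 1e-12)   # Phi(lam*x - alpha), guarded
    Q = 1.0 - P
    dP_da, dP_dl = -phi, phi * x                # Eqs. (25)-(26)
    d2P_da2 = phi * z                           # Eq. (27)
    d2P_dadl = -phi * z * x                     # Eq. (28)
    d2P_dl2 = phi * z * x**2                    # Eq. (29)
    A = r_j / P - (n_g - r_j) / Q               # common bracket in (20)-(21)
    B = -r_j / P**2 - (n_g - r_j) / Q**2        # common bracket in (22)-(24)
    grad = np.array([np.sum(A * dP_da * w), np.sum(A * dP_dl * w)])
    h_aa = np.sum((B * dP_da**2 + A * d2P_da2) * w)
    h_al = np.sum((B * dP_da * dP_dl + A * d2P_dadl) * w)
    h_ll = np.sum((B * dP_dl**2 + A * d2P_dl2) * w)
    step = np.linalg.solve(np.array([[h_aa, h_al], [h_al, h_ll]]), grad)
    return alpha_j - step[0], lam_j - step[1]
```

Because the bracket A vanishes wherever \(P_{ij}(x_{g})\) matches \(r_{j}(x_{g})/n(x_{g})\), the update has a fixed point at the maximizer of \(\tilde{Q}_{1j}\); typically a few iterations per M-step suffice.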

Appendix 2: Technical Details for the Quasi-Newton Algorithm

For our objective function, \(\log \tilde{L}(\theta )\), the first derivative with respect to \(\theta _{j}\), the vector of parameters for the jth item, is

$$\displaystyle{ \frac{\partial \log \tilde{L}(\theta \vert U)} {\partial \theta _{j}} =\sum _{ i=1}^{N}\left [{(\tilde{L}_{ i})}^{-1}\frac{\partial \tilde{L}_{i}} {\partial \theta _{j}} \right ] =\sum _{ i=1}^{N}\left [{(\tilde{L}_{ i})}^{-1}\sum _{ g=1}^{G}\left [\frac{\partial f_{i}(x_{g})} {\partial \theta _{j}} w_{g}\right ]\right ], }$$
(30)

where

$$\displaystyle{ \tilde{L}_{i} =\sum _{ g=1}^{G}\left [\prod _{ j=1}^{J}{(P_{ ij}(x_{g}))}^{u_{ij} }{(Q_{ij}(x_{g}))}^{1-u_{ij} }\right ]w_{g} =\sum _{ g=1}^{G}f_{ i}(x_{g})w_{g}, }$$
(31)
$$\displaystyle{ \frac{\partial f_{i}(x_{g})} {\partial \theta _{j}} = \frac{\partial [P_{ij}{(x_{g})}^{u_{ij}}Q_{ij}{(x_{g})}^{1-u_{ij}}]} {\partial \theta _{j}} \frac{f_{i}(x_{g})} {P_{ij}{(x_{g})}^{u_{ij}}Q_{ij}{(x_{g})}^{1-u_{ij}}}. }$$
(32)

For the probit link, we have

$$\displaystyle{ \frac{\partial P_{ij}(x_{g})} {\partial \alpha _{j}} = -\phi (\alpha _{j} -\lambda _{j}x_{g}) = -\frac{\partial Q_{ij}(x_{g})} {\partial \alpha _{j}}, }$$
(33)
$$\displaystyle{ \frac{\partial P_{ij}(x_{g})} {\partial \lambda _{j}} =\phi (\alpha _{j} -\lambda _{j}x_{g})x_{g} = -\frac{\partial Q_{ij}(x_{g})} {\partial \lambda _{j}}. }$$
(34)
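The quadrature-approximated marginal log-likelihood of Eq. (31) and its gradient from Eqs. (30) and (32)–(34) can be assembled directly and handed to any quasi-Newton optimizer. The sketch below is our illustration (function names and vectorization ours, SciPy assumed); a finite-difference check of the analytic gradient is a useful safeguard when implementing Eq. (32).

```python
import numpy as np
from scipy.special import ndtr  # standard normal CDF

def loglik_and_grad(theta, U, G=21):
    """Quadrature approximation of log L~ (Eq. (31)) and its gradient
    (Eqs. (30), (32)-(34)) for the unidimensional probit model.
    theta stacks (alpha_1..alpha_J, lambda_1..lambda_J); U is (N, J)."""
    N, J = U.shape
    alpha, lam = theta[:J], theta[J:]
    t, w = np.polynomial.hermite.hermgauss(G)
    x, w = t * np.sqrt(2.0), w / np.sqrt(np.pi)   # rescale for N(0, 1)
    z = alpha[None, :] - lam[None, :] * x[:, None]          # (G, J)
    P = np.clip(ndtr(-z), 1e-12, 1.0 - 1e-12)     # P_ij(x_g), guarded
    phi = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    logf = U @ np.log(P).T + (1.0 - U) @ np.log(1.0 - P).T  # (N, G)
    f = np.exp(logf)                               # f_i(x_g)
    Li = f @ w                                     # Eq. (31)
    ll = np.log(Li).sum()
    # Eq. (32): d f_i/d theta_j = f_i (u_ij - P_j)/(P_j (1 - P_j)) dP_j/d theta_j
    S = (U[:, None, :] - P[None, :, :]) / (P * (1.0 - P))[None, :, :]
    fw = f / Li[:, None]                           # L_i^{-1} f_i(x_g), Eq. (30)
    g_alpha = np.einsum('ig,igj,gj,g->j', fw, S, -phi, w)            # Eq. (33)
    g_lam = np.einsum('ig,igj,gj,g->j', fw, S, phi * x[:, None], w)  # Eq. (34)
    return ll, np.concatenate([g_alpha, g_lam])
```

The returned pair (log-likelihood, gradient) is in the form expected by, e.g., `scipy.optimize.minimize` with `jac=True` and a quasi-Newton method such as BFGS, after negating both for minimization.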


Copyright information

© 2013 Springer Science+Business Media New York


Cite this paper

An, X., Yung, YF. (2013). Notes on the Estimation of Item Response Theory Models. In: Millsap, R.E., van der Ark, L.A., Bolt, D.M., Woods, C.M. (eds) New Developments in Quantitative Psychology. Springer Proceedings in Mathematics & Statistics, vol 66. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-9348-8_19

