
Learning Continuous-Time Hidden Markov Models for Event Data

Abstract

The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive modeling tool for mHealth data that takes the form of events occurring at irregularly distributed continuous time points. However, the lack of an efficient parameter learning algorithm for CT-HMM has prevented its widespread use, restricting practitioners to very small models or to unrealistic constraints on the state transitions. In this chapter, we describe recent advances in the development of efficient EM-based learning methods for CT-HMMs. We first review the structure of the learning problem, demonstrating that it consists of two challenges: (1) the estimation of posterior state probabilities and (2) the computation of end-state conditioned expectations. The first challenge can be addressed by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by exploiting computational methods traditionally used for continuous-time Markov chains and adapting them to the CT-HMM domain. We describe three computational approaches and analyze the tradeoffs between them. We evaluate the resulting parameter learning methods in simulation and demonstrate the use of models with more than 100 states to analyze disease progression using glaucoma and Alzheimer’s disease datasets.
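To make the discrete-time reformulation concrete: between consecutive observations the hidden state evolves under the generator matrix Q, so the transition matrix for an interval of length Δ is the matrix exponential e^{QΔ}, and the standard forward recursion for HMMs then applies with a different transition matrix at each step. The following minimal Python sketch computes the observed-data log-likelihood this way; it is not the authors' implementation, and the function name, emission model, and toy parameters are illustrative assumptions.

    import numpy as np
    from scipy.linalg import expm

    def ct_hmm_log_likelihood(Q, pi, B, times, obs):
        """Forward-algorithm log-likelihood for a CT-HMM observed at
        irregular times, treated as a discrete time-inhomogeneous HMM.

        Q     : (n, n) generator matrix (rows sum to zero)
        pi    : (n,) initial state distribution
        B     : (n, m) emission matrix, B[s, o] = P(symbol o | state s)
        times : (T,) increasing observation times
        obs   : (T,) observed symbol indices
        """
        alpha = pi * B[:, obs[0]]                    # forward message at t_0
        c = alpha.sum()
        log_lik = np.log(c)
        alpha /= c
        for k in range(1, len(obs)):
            P = expm(Q * (times[k] - times[k - 1]))  # interval-specific transitions
            alpha = (alpha @ P) * B[:, obs[k]]
            c = alpha.sum()                          # rescale to prevent underflow
            log_lik += np.log(c)
            alpha /= c
        return log_lik

    # Toy 2-state, 2-symbol example.
    Q = np.array([[-0.5, 0.5],
                  [0.2, -0.2]])
    pi = np.array([0.9, 0.1])
    B = np.array([[0.8, 0.2],
                  [0.3, 0.7]])
    print(ct_hmm_log_likelihood(Q, pi, B,
                                times=np.array([0.0, 1.3, 2.1, 5.0]),
                                obs=np.array([0, 0, 1, 1])))

The same rescaled forward pass (together with a matching backward pass) is what yields the posterior state probabilities needed in the E-step.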


Notes

  1. http://www.cbs.gatech.edu/CT-HMM

  2. Note that a version of Eq. (20) appears in [21], but that version contains a small typographic error.

  3. Data were obtained from the ADNI database (adni.loni.usc.edu). The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer’s disease (AD). For up-to-date information, see http://www.adni-info.org.

  4. http://www.cbs.gatech.edu/CT-HMM

References

  1. Al-Mohy, A.H., Higham, N.J.: Computing the Fréchet derivative of the matrix exponential, with an application to condition number estimation. SIAM Journal on Matrix Analysis and Applications 30(4), 1639–1657 (2009)

  2. Al-Mohy, A.H., Higham, N.J.: Computing the action of the matrix exponential, with an application to exponential integrators. SIAM Journal on Scientific Computing 33(2), 488–511 (2011)

  3. Bartolomeo, N., Trerotoli, P., Serio, G.: Progression of liver cirrhosis to HCC: An application of hidden Markov model. BMC Medical Research Methodology 11(38) (2011)

  4. Bauer, F.L., Fike, C.T.: Norms and exclusion theorems. Numerische Mathematik 2(1), 137–141 (1960)

  5. Bindel, D., Goodman, J.: Principles of Scientific Computing (2009)

  6. Bladt, M., Sørensen, M.: Statistical inference for discretely observed Markov jump processes. Journal of the Royal Statistical Society: Series B 67(3), 395–410 (2005)

  7. Cox, D.R., Miller, H.D.: The Theory of Stochastic Processes. Chapman and Hall, London (1965)

  8. Fagan, A.M., Head, D., Shah, A.R., et al.: Decreased CSF Aβ42 correlates with brain atrophy in cognitively normal elderly. Annals of Neurology 65(2), 176–183 (2009)

  9. Golub, G.H., Van Loan, C.F.: Matrix Computations, vol. 3. JHU Press (2012)

  10. Higham, N.J.: Functions of Matrices: Theory and Computation. SIAM Press (2008)

  11. Hobolth, A., Jensen, J.L.: Statistical inference in evolutionary models of DNA sequences via the EM algorithm. Statistical Applications in Genetics and Molecular Biology 4(1) (2005)

  12. Hobolth, A., Jensen, J.L.: Summary statistics for endpoint-conditioned continuous-time Markov chains. Journal of Applied Probability 48(4), 911–924 (2011)

  13. Jackson, C.H.: Multi-state models for panel data: The msm package for R. Journal of Statistical Software 38(8) (2011)

  14. Jensen, A.: Markoff chains as an aid in the study of Markoff processes. Skand. Aktuarietidskr. 36, 87–91 (1953)

  15. Kingman, S.: Glaucoma is second leading cause of blindness globally. Bulletin of the World Health Organization 82(11) (2004)

  16. Leiva-Murillo, J.M., Rodríguez, A.A., Baca-García, E.: Visualization and prediction of disease interactions with continuous-time hidden Markov models. In: Advances in Neural Information Processing Systems (2011)

  17. Liu, Y., Ishikawa, H., Chen, M., et al.: Longitudinal modeling of glaucoma progression using 2-dimensional continuous-time hidden Markov model. In: Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 444–451 (2013)

  18. Liu, Y.Y., Li, S., Li, F., Song, L., Rehg, J.M.: Efficient learning of continuous-time hidden Markov models for disease progression. In: Proc. Twenty-Ninth Annual Conference on Neural Information Processing Systems (NIPS 2015), Montreal, Canada (2015)

  19. McGibbon, R.T., Pande, V.S.: Efficient maximum likelihood parameterization of continuous-time Markov processes. The Journal of Chemical Physics 143(3), 034109 (2015)

  20. Metzner, P., Horenko, I., Schütte, C.: Generator estimation of Markov jump processes. Journal of Computational Physics 227, 353–375 (2007)

  21. Metzner, P., Horenko, I., Schütte, C.: Generator estimation of Markov jump processes based on incomplete observations nonequidistant in time. Physical Review E 76(066702) (2007)

  22. Moler, C., Van Loan, C.: Nineteen dubious ways to compute the exponential of a matrix. SIAM Review 20(4), 801–836 (1978)

  23. Moler, C., Van Loan, C.: Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM Review 45(1), 3–49 (2003)

  24. Nodelman, U., Shelton, C.R., Koller, D.: Expectation maximization and complex duration distributions for continuous time Bayesian networks. In: Proc. Uncertainty in Artificial Intelligence (UAI 2005) (2005)

  25. Osborne, E.E.: On pre-conditioning of matrices. Journal of the ACM 7(4), 338–345 (1960)

  26. Rabiner, L.R.: A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77(2), 257–286 (1989)

  27. Ross, S.M.: Stochastic Processes. John Wiley, New York (1983)

  28. Tataru, P., Hobolth, A.: Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains. BMC Bioinformatics 12(465) (2011)

  29. The Alzheimer’s Disease Neuroimaging Initiative: http://adni.loni.usc.edu

  30. Van Loan, C.: Computing integrals involving the matrix exponential. IEEE Transactions on Automatic Control 23, 395–404 (1978)

  31. Wang, X., Sontag, D., Wang, F.: Unsupervised learning of disease progression models. In: Proc. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 85–94 (2014)

  32. Wollstein, G., Kagemann, L., Bilonick, R., et al.: Retinal nerve fibre layer and visual function loss in glaucoma: the tipping point. British Journal of Ophthalmology 96(1), 47–52 (2012)

Acknowledgements

Portions of this work were supported in part by NIH R01 EY13178-15 and by grant U54EB020404 awarded by the National Institute of Biomedical Imaging and Bioengineering through funds provided by the Big Data to Knowledge (BD2K) initiative (www.bd2k.nih.gov). The research was also supported in part by NSF/NIH BIGDATA 1R01GM108341, ONR N00014-15-1-2340, NSF IIS-1218749, NSF CAREER IIS-1350983, and funding from the Georgia Tech Executive Vice President of Research Office and the Center for Computational Health.

Author information

Correspondence to Alexander Moreno.


Appendix: Derivation of Vectorized Eigen

In [20, 21], it is stated without proof that the naïve Eigen method is equivalent to Vectorized Eigen. Here we present the derivation. Let

$$ \tau_{k,l}^{i,j}(t) = \sum_{p=1}^{n} U_{kp} U_{pi}^{-1} \sum_{q=1}^{n} U_{jq} U_{ql}^{-1} \Psi_{pq}(t) $$

(40)

where the symmetric matrix $\Psi(t) = [\Psi_{pq}(t)]_{p,q \in S}$ is defined as:

$$ \Psi_{pq}(t) = \begin{cases} t\, e^{t\lambda_p} & \text{if } \lambda_p = \lambda_q \\[4pt] \dfrac{e^{t\lambda_p} - e^{t\lambda_q}}{\lambda_p - \lambda_q} & \text{if } \lambda_p \neq \lambda_q \end{cases} $$

(41)

Letting $V = U^{-1}$, this is equivalent to

$$ \tau_{k,l}^{i,j}(t) = \left[\, U \left[ V_i^T U_j \circ \Psi \right] V \right]_{kl} $$

(42)

To see why, first note that the outer product is

$$ V_i^T U_j \circ \Psi = \begin{pmatrix} U_{1,i}^{-1} U_{j,1} \Psi_{1,1} & \cdots & U_{1,i}^{-1} U_{j,n} \Psi_{1,n} \\ \vdots & & \vdots \\ U_{n,i}^{-1} U_{j,1} \Psi_{n,1} & \cdots & U_{n,i}^{-1} U_{j,n} \Psi_{n,n} \end{pmatrix} $$

(43)

Then

$$ U \left[ V_i^T U_j \circ \Psi \right] = \begin{pmatrix} U_{1,1} & \cdots & U_{1,n} \\ \vdots & & \vdots \\ U_{n,1} & \cdots & U_{n,n} \end{pmatrix} \begin{pmatrix} U_{1,i}^{-1} U_{j,1} \Psi_{1,1} & \cdots & U_{1,i}^{-1} U_{j,n} \Psi_{1,n} \\ \vdots & & \vdots \\ U_{n,i}^{-1} U_{j,1} \Psi_{n,1} & \cdots & U_{n,i}^{-1} U_{j,n} \Psi_{n,n} \end{pmatrix} $$

(44)

$$ = \begin{pmatrix} \sum_{p=1}^{n} U_{1,p} U_{p,i}^{-1} U_{j,1} \Psi_{p,1} & \cdots & \sum_{p=1}^{n} U_{1,p} U_{p,i}^{-1} U_{j,n} \Psi_{p,n} \\ \vdots & & \vdots \\ \sum_{p=1}^{n} U_{n,p} U_{p,i}^{-1} U_{j,1} \Psi_{p,1} & \cdots & \sum_{p=1}^{n} U_{n,p} U_{p,i}^{-1} U_{j,n} \Psi_{p,n} \end{pmatrix} $$

(45)

Multiplying on the right by $U^{-1}$ gives

$$ U \left[ V_i^T U_j \circ \Psi \right] U^{-1} = \begin{pmatrix} \sum_{p=1}^{n} U_{1,p} U_{p,i}^{-1} U_{j,1} \Psi_{p,1} & \cdots & \sum_{p=1}^{n} U_{1,p} U_{p,i}^{-1} U_{j,n} \Psi_{p,n} \\ \vdots & & \vdots \\ \sum_{p=1}^{n} U_{n,p} U_{p,i}^{-1} U_{j,1} \Psi_{p,1} & \cdots & \sum_{p=1}^{n} U_{n,p} U_{p,i}^{-1} U_{j,n} \Psi_{p,n} \end{pmatrix} \begin{pmatrix} U_{1,1}^{-1} & \cdots & U_{1,n}^{-1} \\ \vdots & & \vdots \\ U_{n,1}^{-1} & \cdots & U_{n,n}^{-1} \end{pmatrix} $$

(46)

$$ = \begin{pmatrix} \sum_{q=1}^{n} \sum_{p=1}^{n} U_{1,p} U_{p,i}^{-1} U_{j,q} U_{q,1}^{-1} \Psi_{p,q} & \cdots & \sum_{q=1}^{n} \sum_{p=1}^{n} U_{1,p} U_{p,i}^{-1} U_{j,q} U_{q,n}^{-1} \Psi_{p,q} \\ \vdots & & \vdots \\ \sum_{q=1}^{n} \sum_{p=1}^{n} U_{n,p} U_{p,i}^{-1} U_{j,q} U_{q,1}^{-1} \Psi_{p,q} & \cdots & \sum_{q=1}^{n} \sum_{p=1}^{n} U_{n,p} U_{p,i}^{-1} U_{j,q} U_{q,n}^{-1} \Psi_{p,q} \end{pmatrix} $$

(47)

$$ = \begin{pmatrix} \sum_{p=1}^{n} U_{1,p} U_{p,i}^{-1} \sum_{q=1}^{n} U_{j,q} U_{q,1}^{-1} \Psi_{p,q} & \cdots & \sum_{p=1}^{n} U_{1,p} U_{p,i}^{-1} \sum_{q=1}^{n} U_{j,q} U_{q,n}^{-1} \Psi_{p,q} \\ \vdots & & \vdots \\ \sum_{p=1}^{n} U_{n,p} U_{p,i}^{-1} \sum_{q=1}^{n} U_{j,q} U_{q,1}^{-1} \Psi_{p,q} & \cdots & \sum_{p=1}^{n} U_{n,p} U_{p,i}^{-1} \sum_{q=1}^{n} U_{j,q} U_{q,n}^{-1} \Psi_{p,q} \end{pmatrix} $$

(48)

so that

$$ \left[ U \left[ V_i^T U_j \circ \Psi(t) \right] U^{-1} \right]_{kl} = \sum_{p=1}^{n} U_{k,p} U_{p,i}^{-1} \sum_{q=1}^{n} U_{j,q} U_{q,l}^{-1} \Psi_{p,q}(t) $$

(49)

as desired.
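As a sanity check, the equivalence can also be verified numerically. The Python sketch below (not from the chapter; the random test matrix, indices, and variable names are illustrative) compares the naïve double sum of Eq. (40) with the vectorized form of Eq. (42) for one choice of (i, j, k, l), assuming a diagonalizable matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    n, t = 4, 1.7
    Q = rng.standard_normal((n, n))   # any diagonalizable test matrix will do
    lam, U = np.linalg.eig(Q)         # Q = U diag(lam) U^{-1}
    V = np.linalg.inv(U)

    # Psi(t) from Eq. (41); np.where selects the limiting branch when
    # eigenvalues coincide (here only on the diagonal p == q).
    exp_l = np.exp(t * lam)
    diff = lam[:, None] - lam[None, :]
    same = np.isclose(diff, 0.0)
    Psi = np.where(same,
                   t * exp_l[:, None],
                   (exp_l[:, None] - exp_l[None, :]) / np.where(same, 1.0, diff))

    i, j, k, l = 1, 2, 0, 3

    # Naive Eigen: the explicit double sum of Eq. (40).
    naive = sum(U[k, p] * V[p, i] * U[j, q] * V[q, l] * Psi[p, q]
                for p in range(n) for q in range(n))

    # Vectorized Eigen: Eq. (42); the outer product has entries V[p, i] * U[j, q].
    vectorized = (U @ (np.outer(V[:, i], U[j, :]) * Psi) @ V)[k, l]

    assert np.isclose(naive, vectorized)

The Hadamard product with Ψ absorbs the inner scalar sums, which is exactly what Eqs. (43)–(49) show entry by entry: two matrix products recover all (k, l) entries for a given (i, j) at once.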


Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Liu, YY., Moreno, A., Li, S., Li, F., Song, L., Rehg, J.M. (2017). Learning Continuous-Time Hidden Markov Models for Event Data. In: Rehg, J., Murphy, S., Kumar, S. (eds) Mobile Health. Springer, Cham. https://doi.org/10.1007/978-3-319-51394-2_19

  • DOI: https://doi.org/10.1007/978-3-319-51394-2_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-51393-5

  • Online ISBN: 978-3-319-51394-2

  • eBook Packages: Computer Science, Computer Science (R0)
