
Parameter Identifiability and Redundancy, with Applications to a General Class of Stochastic Carcinogenesis Models

Chapter in Systems Biology

Abstract

Models for complex biological systems may involve a large number of parameters. It may well be that some of these parameters cannot be derived from observed data via regression techniques. Such parameters are said to be unidentifiable, the remaining parameters being identifiable. Closely related to this idea is that of redundancy, namely that a set of parameters can be expressed in terms of some smaller set. Before data is analysed it is critical to determine which model parameters are identifiable or redundant, to avoid ill-defined and poorly convergent regression. This problem has been considered from a number of points of view in the literature. One distinct recent application has been to biologically based cancer models. Heidenreich et al. (Risk Anal 1997 17 391–399) considered parameter identifiability in the context of the two-mutation cancer model and demonstrated that combinations of all but two of the model parameters are identifiable. Here we outline general considerations on parameter identifiability, and introduce the notions of weak local identifiability and gradient weak local identifiability. These are based on local properties of the likelihood, in particular the rank of the Hessian matrix. We relate these to the notions of parameter identifiability and redundancy previously introduced by Rothenberg (Econometrica 1971 39 577–591) and Catchpole and Morgan (Biometrika 1997 84 187–196). Within the widely used exponential family, parameter irredundancy, local identifiability, gradient weak local identifiability and weak local identifiability are shown to be largely equivalent. We consider applications to a recently developed class of cancer models of Little and Wright (Math Biosciences 2003 183 111–134) and Little et al. (J Theoret Biol 2008 254 229–238) that generalizes a large number of other recently used quasi-biological cancer models, in particular the two-mutation model of Heidenreich et al. (Risk Anal 1997 17 391–399). We show that in the simpler model proposed by Little and Wright (Math Biosciences 2003 183 111–134) the number of identifiable combinations of parameters is at most two less than the number of biological parameters, thereby generalizing previous results of Heidenreich et al. (Risk Anal 1997 17 391–399) for the two-mutation model. For the more general model of Little et al. (J Theoret Biol 2008 254 229–238) the number of identifiable combinations of parameters is at most \( r + 1 \) less than the number of biological parameters, where \( r \) is the number of destabilization types (types of genomic instability), thereby also generalizing all these results. Numerical evaluations suggest that these bounds are sharp. We also identify particular combinations of identifiable parameters.


Abbreviations

GLM: Generalized linear model

GSD: Geometric standard deviation

References

  1. Rothenberg TJ (1971) Identification in parametric models. Econometrica 39(3):577–591

  2. Silvey SD (1975) Statistical inference. Chapman and Hall, London, pp 1–191

  3. Catchpole EA, Morgan BJT (1997) Detecting parameter redundancy. Biometrika 84(1):187–196

  4. Jacquez JA, Perry T (1990) Parameter estimation: local identifiability of parameters. Am J Physiol 258(4 Pt 1):E727–E736

  5. Little MP, Heidenreich WF, Li G (2010) Parameter identifiability and redundancy: theoretical considerations. PLoS One 5(1):e8915

  6. Chappell MJ, Gunn RN (1998) A procedure for generating locally identifiable reparameterisations of unidentifiable non-linear systems by the similarity transformation approach. Math Biosci 148(1):21–41

  7. Evans ND, Chappell MJ (2000) Extensions to a procedure for generating locally identifiable reparameterisations of unidentifiable systems. Math Biosci 168(2):137–159

  8. Little MP, Wright EG (2003) A stochastic carcinogenesis model incorporating genomic instability fitted to colon cancer data. Math Biosci 183(2):111–134

  9. Little MP, Vineis P, Li G (2008) A stochastic carcinogenesis model incorporating multiple types of genomic instability fitted to colon cancer data. J Theor Biol 254(2):229–238; erratum in J Theor Biol 255(2):268

  10. Armitage P, Doll R (1954) The age distribution of cancer and a multi-stage theory of carcinogenesis. Br J Cancer 8(1):1–12

  11. Moolgavkar SH, Venzon DJ (1979) Two-event models for carcinogenesis: incidence curves for childhood and adult tumors. Math Biosci 47(1–2):55–77

  12. Little MP (1995) Are two mutations sufficient to cause cancer? Some generalizations of the two-mutation model of carcinogenesis of Moolgavkar, Venzon, and Knudson, and of the multistage model of Armitage and Doll. Biometrics 51(4):1278–1291

  13. Nowak MA, Komarova NL, Sengupta A, Jallepalli PV, Shih IM, Vogelstein B et al (2002) The role of chromosomal instability in tumor initiation. Proc Natl Acad Sci USA 99(25):16226–16231

  14. Heidenreich WF (1996) On the parameters of the clonal expansion model. Radiat Environ Biophys 35(2):127–129

  15. Heidenreich WF, Luebeck EG, Moolgavkar SH (1997) Some properties of the hazard function of the two-mutation clonal expansion model. Risk Anal 17(3):391–399

  16. Little MP, Heidenreich WF, Li G (2009) Parameter identifiability and redundancy in a general class of stochastic carcinogenesis models. PLoS One 4(12):e8520

  17. McCullagh P, Nelder JA (1989) Generalized linear models. Monographs on statistics and applied probability, vol 37, 2nd edn. Chapman and Hall/CRC, Boca Raton, pp 1–526

  18. Rudin W (1976) Principles of mathematical analysis, 3rd edn. McGraw Hill, Auckland, pp 1–352

  19. Viallefont A, Lebreton J-D, Reboulet A-M, Gory G (1998) Parameter identifiability and model selection in capture-recapture models: a numerical approach. Biometrical J 40:313–325

  20. Dickson LE (1926) Modern algebraic theories. B.J.H. Sanborn, Chicago, pp 1–273

  21. Little MP, Li G (2007) Stochastic modelling of colon cancer: is there a role for genomic instability? Carcinogenesis 28(2):479–487

  22. Armitage P, Doll R (1957) A two-stage theory of carcinogenesis in relation to the age distribution of human cancer. Br J Cancer 11(2):161–169

  23. Cole DJ, Morgan BJT, Titterington DM (2010) Determining the parametric structure of models. Math Biosci 228(1):16–30

  24. Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1992) Numerical recipes in Fortran 77. The art of scientific computing, 2nd edn. Cambridge University Press, Cambridge, pp 1–934

  25. Golub GH, van Loan CF (1996) Matrix computations, 3rd edn. The Johns Hopkins University Press, Baltimore, pp 1–728

  26. Numerical Algorithms Group (2009) NAG FORTRAN Library Mark 22, Oxford

  27. Hanin LG, Yakovlev AY (1996) A nonidentifiability aspect of the two-stage model of carcinogenesis. Risk Anal 16(5):711–715

  28. Hanin LG (2002) Identification problem for stochastic models with application to carcinogenesis, cancer detection and radiation biology. Discrete Dyn Nat Soc 7(3):177–189


Acknowledgments

This work was supported by the Intramural Research Program of the National Institutes of Health, the National Cancer Institute, Division of Cancer Epidemiology and Genetics.


Appendices

Appendix A

Proof of Theorem 1

In this Appendix we outline a proof of Theorem 1 in the main text. To prove this result we need the following lemma of Rudin [18] (p. 229).

Lemma A1

Suppose \( m,n,r \) are non-negative integers such that (s.t.) \( m \ge r \) and \( n \ge r \), and that \( F \) is a \( C^{1} \) function \( E \subset R^{n} \to R^{m} \), where \( E \) is an open set. Suppose that \( rk(F^{\prime } (x)) = r \) \( \forall x \in E \). Fix \( a \in E \), put \( A = F^{\prime } (a) \), let \( Y_{1} = A(R^{n} ) \), and let \( P:R^{m} \to R^{m} \) be a linear projection operator (\( P^{2} = P \)) s.t. \( Y_{1} = P(R^{m} ) \) and \( Y_{2} = null(P) \). Then there exist open sets \( U,V \subset R^{n} \) and a bijective \( C^{1} \) function \( H:V \to U \), whose inverse is also \( C^{1} \), s.t. \( F(H(x)) = Ax + \varphi (Ax) \) \( \forall x \in V \), where \( \varphi :AV \subset Y_{1} \to Y_{2} \) is a \( C^{1} \) function.

We now restate Theorem 1 here.

Theorem A2

Suppose that the log-likelihood \( L(x|\theta ) \) is \( C^{2} \) as a function of the parameter vector \( \theta \in \Upomega \subset R^{p} \), for all \( x = (x_{1} , \ldots ,x_{n} ) \in \Upsigma^{n} \).

  1. Suppose that for some \( x \) and \( \theta \in \text{int} (\Upomega ) \) it is the case that \( rk\left[ {\left( {\frac{{\partial^{2} L(x|\theta )}}{{\partial \theta_{i} \partial \theta_{j} }}} \right)_{i,j = 1}^{p} } \right] = p \). Then turning points of the likelihood in the neighborhood of \( \theta \) are isolated, i.e., there is an open neighborhood \( N \in \aleph_{\theta } \subset \Upomega \) for which there is at most one \( \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } \in N \) that satisfies \( \left. {\left( {\frac{\partial L(x|\theta )}{{\partial \theta_{i} }}} \right)_{i = 1}^{p} } \right|_{{\theta = \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } }} = 0 \).

  2. Suppose that for some \( x \) and \( \theta \in \text{int} (\Upomega ) \) it is the case that \( rk\left[ {\left( {\frac{{\partial^{2} L(x|\theta )}}{{\partial \theta_{i} \partial \theta_{j} }}} \right)_{i,j = 1}^{p} } \right] = p \). Then local maxima of the likelihood in the neighborhood of \( \theta \) are isolated, i.e., there is an open neighborhood \( N \in \aleph_{\theta } \subset \Upomega \) for which there is at most one \( \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } \in N \) that is a local maximum of \( L(x|\theta ) \).

  3. Suppose that for some \( x \) and all \( \theta \in \text{int} (\Upomega ) \) it is the case that \( rk\left[ {\left( {\frac{{\partial^{2} L(x|\theta )}}{{\partial \theta_{i} \partial \theta_{j} }}} \right)_{i,j = 1}^{p} } \right] = r < p \). Then no local maximum of the likelihood in \( \text{int} (\Upomega ) \) is isolated; indeed, no \( \theta \in \text{int} (\Upomega ) \) for which \( \left( {\frac{\partial L(x|\theta )}{{\partial \theta_{i} }}} \right)_{i = 1}^{p} = 0 \) is an isolated turning point.

Proof

  1. Let \( F:\Upomega \subset R^{p} \to R^{p} \) be defined by \( F(\theta_{1} ,\theta_{2} , \ldots ,\theta_{p} ) = \left( {\frac{\partial L(x|\theta )}{{\partial \theta_{1} }},\frac{\partial L(x|\theta )}{{\partial \theta_{2} }}, \ldots ,\frac{\partial L(x|\theta )}{{\partial \theta_{p} }}} \right) \). Since \( L \) is \( C^{2} \), \( F \) is \( C^{1} \) on \( \text{int} (\Upomega ) \subset R^{p} \). By assumption \( \frac{{\partial F(\theta )_{i} }}{{\partial \theta_{j} }} = \left( {\frac{{\partial^{2} L(x|\theta )}}{{\partial \theta_{j} \partial \theta_{i} }}} \right)_{i,j = 1}^{p} \) is of full rank at \( \theta \). By the inverse function theorem [18] (p.221–223) there are open \( N,M \subset R^{p} \) such that \( \theta \in N \) and a \( C^{1} \) bijective function \( G:M \to N \) such that \( G(F(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } )) = \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } \) for all \( \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } \in N \). In particular there can be at most a single \( \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } \in N \) for which \( F(\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } ) = 0 \). QED.

  2. By (1) there is an open neighborhood \( N \in \aleph_{\theta } \subset \Upomega \) for which if \( \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } \in N \) is such that \( \left. {\left( {\frac{\partial L(x|\theta )}{{\partial \theta_{i} }}} \right)_{i = 1}^{p} } \right|_{{\theta = \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } }} = 0 \) then for \( \theta ' \ne \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } \in N \) \( \left. {\left( {\frac{\partial L(x|\theta )}{{\partial \theta_{i} }}} \right)_{i = 1}^{p} } \right|_{\theta = \theta '} \ne 0 \). Suppose now that \( \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } \in N \) is a local maximum of \( L(x|\theta ) \). Any member of this neighborhood other than \( \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\frown}$}}{\theta } \) cannot be a turning point, and so by the Mean Value Theorem [18] (p.107) cannot be a local maximum. QED.

  3. Let \( F:\Upomega \subset R^{p} \to R^{p} \) be defined by \( F(\theta_{1} ,\theta_{2} , \ldots ,\theta_{p} ) = \left( {\frac{\partial L(x|\theta )}{{\partial \theta_{1} }},\frac{\partial L(x|\theta )}{{\partial \theta_{2} }}, \ldots ,\frac{\partial L(x|\theta )}{{\partial \theta_{p} }}} \right) \). Since \( L \) is \( C^{2} \), \( F \) is \( C^{1} \) on \( \text{int} (\Upomega ) \subset R^{p} \). By assumption \( rk(F^{\prime}) = rk\left[ {\left( {\frac{{\partial^{2} L(x|\theta )}}{{\partial \theta_{i} \partial \theta_{j} }}} \right)_{i,j = 1}^{p} } \right] = r \) for all \( \theta \in \text{int} (\Upomega ) \subset R^{p} \). Suppose that \( \theta_{0} \in \text{int} (\Upomega ) \) is a local maximum of \( L \). Let \( A = \left. {\frac{\partial F}{\partial \theta }} \right|_{{\theta = \theta_{0} }} :R^{p} \to R^{p} \) (\( A \in L(R^{p} ,R^{p} ) \)), and choose some arbitrary projection \( P \in L(R^{p} ,R^{p}) \) s.t. \( P(R^{p} ) = Y_{1} = A(R^{p} ) \), and let \( Y_{2} = null(P) \). By Lemma A1 there are open sets \( U,V \subset R^{p} \) with \( \theta_{0} \in U \subset \text{int} (\Upomega ) \) and a bijective \( C^{1} \) mapping with \( C^{1} \) inverse \( H:V \to U \) s.t. \( F(y) = AH^{ - 1} y + \varphi (AH^{ - 1} y) \) \( \forall y \in U \), where \( \varphi :AV \subset Y_{1} \to Y_{2} \) is a \( C^{1} \) function.

Since \( \theta_{0} \in \text{int} (\Upomega ) \) is a local maximum of \( L(x|\theta ) \), by the Mean Value Theorem [18] (p.107) \( F(\theta_{0} ) = 0 \). Now choose some non-trivial vector \( \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle\thicksim}$}}{k} \in null(A) \), which exists since \( rk(A) = r < p \), and for sufficiently small \( \varepsilon > 0 \) define \( \delta :( - \varepsilon ,\varepsilon ) \to R^{p} \) by \( \delta (t) = H(H^{ - 1} (\theta_{0} ) + t\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle\thicksim}$}}{k} ) \). Because \( H:V \to U \) is bijective and \( \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle\thicksim}$}}{k} \) is non-trivial, \( \delta (t) = \delta (t^{\prime } ) \Leftrightarrow t = t^{\prime } \). Also, it is the case that:

$$ \begin{array}{*{20}l} {F(\delta (t)) = AH^{{ - 1}} (H[H^{{ - 1}} (\theta _{0} ) + t\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle\thicksim}$}}{k} ]) + \varphi (AH^{{ - 1}} (H[H^{{ - 1}} (\theta _{0} ) + t\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle\thicksim}$}}{k} ])) = } \hfill \\ {A[H^{{ - 1}} (\theta _{0} ) + t\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle\thicksim}$}}{k} ] + \varphi (A[H^{{ - 1}} (\theta _{0} ) + t\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle\thicksim}$}}{k} ]) = A[H^{{ - 1}} (\theta _{0} )] + \varphi (A[H^{{ - 1}} (\theta _{0} )]) = F(\theta _{0} ) = 0} \hfill \\ \end{array}$$
(11.A1)

Define \( G:( - \varepsilon ,\varepsilon ) \to R \) by \( G(t) = L(x|\delta (t)) = L(x|(\delta_{1} (t),\delta_{2} (t), \ldots ,\delta_{p} (t))) \). By the Chain Rule [18] (p.215) and (11.A1), \( \frac{dG}{dt} = \sum\limits_{i = 1}^{p} {\frac{\partial L(x|\theta )}{{\partial \theta_{i} }}\frac{{d\delta_{i} }}{dt}} = 0 \) \( \forall t \in ( - \varepsilon ,\varepsilon ) \). Finally, by the Mean Value Theorem [18] (p.107) \( G \) must be constant; in particular \( L(x|\delta (t)) = L(x|\delta (0)) = L(x|\theta_{0} ) \) \( \forall t \in ( - \varepsilon ,\varepsilon ) \), and so all points \( \delta (t) \) must also be local maxima of \( L(x|\theta ) \). Therefore \( \theta_{0} \) is not an isolated local maximum. Since the only property of \( \theta_{0} \in \text{int} (\Upomega ) \) used was that \( F(\theta_{0} ) = \left( {\frac{{\partial L(x|\theta )}}{{\partial \theta_{i} }}} \right)_{i = 1}^{p} \Big|_{\theta = \theta_{0}} = 0 \), the above argument also shows that turning points cannot be isolated: \( F(\delta (t)) = 0 \). QED.
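
The rank condition in Theorem A2 is straightforward to probe numerically. The following minimal sketch (an illustration only, not the fitting machinery of the main text) estimates the Hessian of a deliberately parameter-redundant toy log-likelihood by central finite differences; the resulting rank deficiency signals the ridge of non-isolated maxima described in part 3.

```python
# Minimal sketch: finite-difference Hessian of a log-likelihood and its rank,
# as in Theorem A2. The toy log-likelihood is hypothetical and deliberately
# redundant: it depends on theta only through theta[0] + theta[1].
import numpy as np

def log_lik(theta):
    s = theta[0] + theta[1]
    return -(s - 1.0) ** 2

def hessian_fd(f, theta, eps=1e-5):
    p = len(theta)
    H = np.zeros((p, p))
    t = np.asarray(theta, dtype=float)
    for i in range(p):
        for j in range(p):
            def shift(di, dj):
                u = t.copy()
                u[i] += di * eps
                u[j] += dj * eps
                return f(u)
            # central second difference for d2 f / dtheta_i dtheta_j
            H[i, j] = (shift(1, 1) - shift(1, -1) - shift(-1, 1)
                       + shift(-1, -1)) / (4 * eps ** 2)
    return H

H = hessian_fd(log_lik, [0.4, 0.6])
print(H)
print(np.linalg.matrix_rank(H, tol=1e-6))  # rank 1 < p = 2: a ridge, not isolated points
```

Here the turning points form the whole line \( \theta_{1} + \theta_{2} = 1 \), exactly the non-isolated behaviour of part 3 of the theorem.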

Appendix B

2.1 Specification of Embedded Exponential Family Model

In this Appendix we outline the specification of an embedding of a stochastic cancer model in a general class of statistical models, the so-called exponential family [17]. This is often done in fitting cancer models to epidemiological and biological data (e.g., see references [8, 9, 16, 21]). Recall that a model is a member of the exponential family if the observed data \( x = (x_{l} )_{l = 1}^{n} \in \Upsigma^{n} \) is such that the log-likelihood is given by \( L(x|\theta ) = \sum\limits_{l = 1}^{n} {\left[ {\frac{{x_{l} \varsigma_{l} - b(\varsigma_{l} )}}{a(\phi )} + c(x_{l} ,\phi )} \right]} \) for some functions \( a(\phi ),b(\varsigma ),c(x,\phi ) \). We assume that the natural parameters \( \varsigma_{l} = \varsigma_{l} [(\theta_{i} )_{i = 1}^{p} ,z_{l} ] \) are functions of the model parameters \( (\theta_{i} )_{i = 1}^{p} \) and some auxiliary data \( (z_{l} )_{l = 1}^{n} \), and that \( \mu_{l} = b^{\prime } (\varsigma_{l} [(\theta_{i} )_{i = 1}^{p} ,z_{l} ]) = z_{l} \cdot h[(\theta_{i} )_{i = 1}^{p} ,y_{l} ] \). Here \( h[(\theta_{i} )_{i = 1}^{p} ,y_{l} ] \) is the cancer hazard function (for example, that of Little et al. [9], as also specified in Appendix C), \( (y_{l} )_{l = 1}^{n} \) are some further auxiliary data, and we assume that the \( (z_{l} )_{l = 1}^{n} \) are all non-zero. [Note: this is not necessarily a GLM.] In this case it is seen that

$$ \frac{{\partial ^{2} L(x|\theta )}}{{\partial \theta _{i} \partial \theta _{j} }} = \sum\limits_{{l = 1}}^{n} {\left[ {\begin{array}{*{20}l} {\frac{{[x_{l} - b^{\prime}(\varsigma _{l} )]z_{l} }}{{a(\phi )b^{\prime\prime}(\varsigma _{l} )}}\frac{{\partial ^{2} h(\theta ,y_{l} )}}{{\partial \theta _{i} \partial \theta _{j} }}} \hfill \\ { - \frac{{z_{l} ^{2} }}{{a(\phi )}}\frac{{\partial h(\theta ,y_{l} )}}{{\partial \theta _{i} }}\frac{{\partial h(\theta ,y_{l} )}}{{\partial \theta _{j} }}\left\{ {\frac{{[b^{\prime\prime}(\varsigma _{l} )]^{2} + b^{\prime\prime\prime}(\varsigma _{l} )[x_{l} - b^{\prime}(\varsigma _{l} )]}}{{[b^{\prime\prime}(\varsigma _{l} )]^{3} }}} \right\}} \hfill \\ \end{array} } \right]} $$
(11.B1)

so that the Fisher information matrix is given by

$$ I(\theta )_{ij} = - E_{\theta } \left[ {\frac{{\partial^{2} L(x|\theta )}}{{\partial \theta_{i} \partial \theta_{j} }}} \right] = \frac{1}{a(\phi )}\sum\limits_{l = 1}^{n} {\frac{{z_{l}^{2} }}{{b^{\prime \prime } (\varsigma_{l} )}}\frac{{\partial h(\theta ,y_{l} )}}{{\partial \theta_{i} }}\frac{{\partial h(\theta ,y_{l} )}}{{\partial \theta_{j} }}} $$
(11.B2)
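
As an informal illustration of (11.B2), the sketch below evaluates the Fisher information for the Poisson special case of the exponential family (\( a(\phi ) = 1 \), \( b(\varsigma ) = e^{\varsigma } \), so that \( b^{\prime \prime } (\varsigma_{l} ) = \mu_{l} = z_{l} h_{l} \)), a common setting when fitting hazard models to grouped cohort data. The two-parameter hazard is a hypothetical stand-in, not the model of Little et al. [9]; the point is that the rank of \( I(\theta ) \), read off here from its singular values, bounds the number of identifiable parameter combinations.

```python
# Hedged sketch of the Fisher information (11.B2) for a Poisson likelihood:
# I(theta) = sum_l z_l^2 / b''(s_l) * grad h * grad h^T, with b''(s_l) = z_l * h_l.
# The hazard below is a hypothetical two-parameter stand-in.
import numpy as np

def hazard(theta, y):
    a, b = theta
    return a * np.exp(b * y)

def grad_hazard(theta, y):
    a, b = theta
    return np.array([np.exp(b * y), a * y * np.exp(b * y)])

theta = np.array([0.01, 0.05])
y = np.linspace(20.0, 80.0, 13)   # e.g. mid-points of age groups (the y_l)
z = np.full_like(y, 1e4)          # e.g. person-years at risk (the z_l)

I = np.zeros((2, 2))
for y_l, z_l in zip(y, z):
    g = grad_hazard(theta, y_l)
    mu_l = z_l * hazard(theta, y_l)          # = b''(s_l) in the Poisson case
    I += (z_l ** 2 / mu_l) * np.outer(g, g)

print(np.linalg.svd(I, compute_uv=False))    # singular values of I(theta)
print(np.linalg.matrix_rank(I))              # bounds the identifiable combinations
```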

Appendix C

3.1 Derivation of Hazard Function in Terms of Specific Parameter Combinations for the Cancer Model of Little et al. [9]

In this Appendix we derive the hazard function for the cancer model of Little et al. [9] and show that it can be written in terms of certain combinations of parameters, given in Table 11.2. The hazard function is defined as:

$$ h(t) = - \frac{d}{dt}\ln \psi \left( {1,1, \ldots ,1,0;t,0} \right) $$
(11.C1)

where

$$ \begin{array}{*{20}l} {\psi (y_{1,0,0} ,y_{2,0,0} , \ldots ,y_{k - 1,0,0} ,y_{0,1,1} , \ldots ,y_{k - 1,1,1} ,y_{0,2,1} ,y_{1,2,1} , \ldots ,y_{{k - 1,m_{r} ,r}} ,y_{k} ;t,0)} \hfill \\ { \equiv \psi (t)} \hfill \\ { = \sum\limits_{n} {y_{1,0,0}^{{n_{1,0,0} }} \cdot \ldots \cdot y_{k - 1,0,0}^{{n_{k - 1,0,0} }} \cdot y_{0,1,1}^{{n_{0,1,1} }} \cdot \ldots \cdot y_{k - 1,1,1}^{{n_{k - 1,1,1} }} \cdot \ldots \cdot y_{{k - 1,m_{r} ,r}}^{{n_{{k - 1,m_{r} ,r}} }} \cdot y_{k}^{{n_{k} }} \,\times\,} } \hfill \\ {\quad \quad P\left( {Y_{1,0,0} (t) = n_{1,0,0} , \ldots ,Y_{k} (t) = n_{k} \left| {N(0) = X(0),Y_{1,0,0} (0) = \ldots = Y_{k} (0) = 0} \right.} \right)} \hfill \\ \end{array} $$
(11.C2)

is the full probability generating function (PGF) starting with X(0) cell(s) in the normal compartment at time 0. The number of biological parameters in this specific model is summarized in Table 11.1.

By straightforward generalizations of material in Little and Wright [8] (given in Appendix D) it is seen that \( \psi \) satisfies a Kolmogorov forward equation:

$$ \begin{array}{*{20}l} {\frac{{d\psi }}{{dt}} = \psi \cdot [y_{{1,0,0}} - 1] \cdot X(t) \cdot M(0,0,0)(t) + \psi \cdot \sum\limits_{{d = 1}}^{r} {[y_{{0,1,d}} - 1] \cdot X(t) \cdot A(0,0,d)(t)} \,+ } \hfill \\ {\sum\limits_{{1 \le \alpha \le k - 1}} {\left[ {\begin{array}{*{20}l} { - y_{{\alpha ,0,0}} \cdot [D(\alpha ,0,0)(t) + G(\alpha ,0,0)(t) + M(\alpha ,0,0)(t)} \hfill \\ { + \sum\limits_{{d' = 1}}^{r} {A(\alpha ,0,d')(t)} ] + y_{{\alpha ,0,0}} ^{2} \cdot G(\alpha ,0,0)(t) + D(\alpha ,0,0)(t)} \hfill \\ { + y_{{\alpha ,0,0}} \cdot y_{{\alpha + 1,0,0}} \cdot M(\alpha ,0,0)(t) + \sum\limits_{{d' = 1}}^{r} {y_{{\alpha ,0,0}} \cdot y_{{\alpha ,1,d'}} \cdot A(\alpha ,0,d')(t)} } \hfill \\ \end{array} } \right] \cdot \frac{{\partial \psi }}{{\partial y_{{\alpha ,0,0}} }}} } + \hfill \\ {\sum\limits_{{\begin{array}{*{20}l} {0 \le \alpha \le k - 1} \hfill \\ {1 \le \beta \le m_{d} } \hfill \\ {1 \le d \le r} \hfill \\ \end{array} }} {\left[ {\begin{array}{*{20}l} { - y_{{\alpha ,\beta ,d}} \cdot [D(\alpha ,\beta ,d)(t) + G(\alpha ,\beta ,d)(t) + M(\alpha ,\beta ,d)(t)} \hfill \\ { + A(\alpha ,\beta ,d)(t)] + y_{{\alpha ,\beta ,d}} ^{2} \cdot G(\alpha ,\beta ,d)(t) + D(\alpha ,\beta ,d)(t)} \hfill \\ { + y_{{\alpha ,\beta ,d}} \cdot y_{{\alpha + 1,\beta ,d}} \cdot M(\alpha ,\beta ,d)(t) + y_{{\alpha ,\beta ,d}} \cdot y_{{\alpha ,\beta + 1,d}} \cdot A(\alpha ,\beta ,d)(t)} \hfill \\ \end{array} } \right] \cdot \frac{{\partial \psi }}{{\partial y_{{\alpha ,\beta ,d}} }}} } \hfill \\ \end{array} $$
(11.C3)

with the conventions that \( y_{\alpha ,\beta ,d} \equiv 0 \) for \( \beta > m_{d} \), \( A(\alpha ,\beta ,d) \equiv 0 \) for \( \beta \ge m_{d} \) and \( y_{k,\beta ,d} \equiv y_{k} \) for all defined \( \beta \) and \( d \). We solve the equation by means of Cauchy’s method of characteristics. Suppose \( y_{\alpha ,\beta ,d} \equiv y_{\alpha ,\beta ,d} (u) \) and \( t \equiv t(u) \), then \( \psi \equiv \psi (y_{\alpha ,\beta ,d} (u),t(u)) \). This implies that:

$$ \begin{array}{*{20}l} {\frac{{\partial \psi }}{{\partial u}} = \frac{{\partial \psi }}{{\partial t}} \cdot \frac{{dt}}{{du}} + \frac{{\partial \psi }}{{\partial y_{k} }} \cdot \frac{{dy_{k} }}{{du}} + \sum\limits_{{1 \le \alpha \le k - 1}} {\frac{{\partial \psi }}{{\partial y_{{\alpha ,0,0}} }} \cdot \frac{{dy_{{\alpha ,0,0}} }}{{du}}} + \sum\limits_{{\begin{array}{*{20}l} {0 \le \alpha \le k - 1} \hfill \\ {1 \le \beta \le m_{d} } \hfill \\ {1 \le d \le r} \hfill \\ \end{array} }} {\frac{{\partial \psi }}{{\partial y_{{\alpha ,\beta ,d}} }} \cdot \frac{{dy_{{\alpha ,\beta ,d}} }}{{du}}} } \hfill \\ { = \frac{{dt}}{{du}} \cdot \left[ {\begin{array}{*{20}l} {\psi \cdot [y_{{1,0,0}} - 1] \cdot X(t) \cdot M(0,0,0)(t) + \psi \cdot \sum\limits_{{d = 1}}^{r} {[y_{{0,1,d}} - 1] \cdot X(t) \cdot A(0,0,d)(t)} + } \hfill \\ {\sum\limits_{{1 \le \alpha \le k - 1}} {\left[ {\begin{array}{*{20}l} { - y_{{\alpha ,0,0}} \cdot [D(\alpha ,0,0)(t) + G(\alpha ,0,0)(t) + M(\alpha ,0,0)(t)} \hfill \\ { + \sum\limits_{{d' = 1}}^{r} {A(\alpha ,0,d')(t)} ] + y_{{\alpha ,0,0}} ^{2} \cdot G(\alpha ,0,0)(t) + D(\alpha ,0,0)(t)} \hfill \\ { +\, y_{{\alpha ,0,0}} \cdot y_{{\alpha + 1,0,0}} \cdot M(\alpha ,0,0)(t) + \sum\limits_{{d' = 1}}^{r} {y_{{\alpha ,0,0}} \cdot y_{{\alpha ,1,d'}} \cdot A(\alpha ,0,d')(t)} } \hfill \\ \end{array} } \right] \cdot \frac{{\partial \psi }}{{\partial y_{{\alpha ,0,0}} }}} } + \hfill \\ {\sum\limits_{{\begin{array}{*{20}l} {0 \le \alpha \le k - 1} \hfill \\ {1 \le \beta \le m_{d} } \hfill \\ {1 \le d \le r} \hfill \\ \end{array} }} {\left[ {\begin{array}{*{20}l} { - y_{{\alpha ,\beta ,d}} \cdot [D(\alpha ,\beta ,d)(t) + G(\alpha ,\beta ,d)(t) + M(\alpha ,\beta ,d)(t)} \hfill \\ { + A(\alpha ,\beta ,d)(t)] + y_{{\alpha ,\beta ,d}} ^{2} \cdot G(\alpha ,\beta ,d)(t) + D(\alpha ,\beta ,d)(t)} \hfill \\ { + y_{{\alpha ,\beta ,d}} \cdot y_{{\alpha + 1,\beta ,d}} \cdot M(\alpha ,\beta ,d)(t) + y_{{\alpha ,\beta ,d}} \cdot y_{{\alpha ,\beta + 1,d}} \cdot A(\alpha ,\beta ,d)(t)} \hfill \\ \end{array} } \right] \cdot \frac{{\partial \psi }}{{\partial y_{{\alpha ,\beta ,d}} }}} } \hfill \\ \end{array} } \right]} \hfill \\ { + \frac{{\partial \psi }}{{\partial y_{k} }} \cdot \frac{{dy_{k} }}{{du}} + \sum\limits_{{1 \le \alpha \le k - 1}} {\frac{{\partial \psi }}{{\partial y_{{\alpha ,0,0}} }} \cdot \frac{{dy_{{\alpha ,0,0}} }}{{du}}} + \sum\limits_{{\begin{array}{*{20}l} {0 \le \alpha \le k - 1} \hfill \\ {1 \le \beta \le m_{d} } \hfill \\ {1 \le d \le r} \hfill \\ \end{array} }} {\frac{{\partial \psi }}{{\partial y_{{\alpha ,\beta ,d}} }} \cdot \frac{{dy_{{\alpha ,\beta ,d}} }}{{du}}} } \hfill \\ \end{array} $$
(11.C4)

A solution is therefore given by:

$$ \begin{aligned} \frac{\partial \psi }{\partial u} &= \psi \cdot [y_{1,0,0} (u) - 1] \cdot X(u) \cdot M(0,0,0)(u)\\ &+ \psi \cdot \sum\limits_{d = 1}^{r} {[y_{0,1,d} (u) - 1] \cdot X(u) \cdot A(0,0,d)(u)}\end{aligned} $$
(11.C5)
$$ \frac{{\partial t}}{{\partial u}} = 1 $$
(11.C6)
$$ \frac{{\partial y_{k} }}{{\partial u}} = 0 $$
(11.C7)

and for \( d = 0: \)

$$ \begin{array}{*{20}l} {\frac{{dy_{{\alpha ,0,0}} }}{{du}} = y_{{\alpha ,0,0}} \cdot [D(\alpha ,0,0)(t) + G(\alpha ,0,0)(t) + M(\alpha ,0,0)(t)} \hfill \\ \quad \quad \quad { + \sum\limits_{{d' = 1}}^{r} {A(\alpha ,0,d')(t)} ] - y_{{\alpha ,0,0}} ^{2} \cdot G(\alpha ,0,0)(t) - D(\alpha ,0,0)(t)} \hfill \\ \quad \quad \quad { - y_{{\alpha ,0,0}} \cdot y_{{\alpha + 1,0,0}} \cdot M(\alpha ,0,0)(t) - \sum\limits_{{d' = 1}}^{r} {y_{{\alpha ,0,0}} \cdot y_{{\alpha ,1,d'}} \cdot A(\alpha ,0,d')(t)} } \hfill \\ \end{array} $$
(11.C8)

while for \( d \ne 0 \):

$$\begin{aligned}\frac{{dy_{\alpha ,\beta ,d} }} {{du}} & = y_{\alpha ,\beta ,d} \cdot [D(\alpha ,\beta ,d)(t) + G(\alpha ,\beta ,d)(t) + M(\alpha ,\beta ,d)(t) \\ & \quad + A(\alpha ,\beta ,d)(t)] - y_{\alpha ,\beta ,d}^2 \cdot G(\alpha ,\beta ,d)(t) - D(\alpha ,\beta ,d)(t) \\ & \quad- y_{\alpha ,\beta ,d} \cdot y_{\alpha + 1,\beta ,d} \cdot M(\alpha ,\beta ,d)(t) - y_{\alpha ,\beta ,d} \cdot y_{\alpha ,\beta + 1,d} \cdot A(\alpha ,\beta ,d)(t) \\ \end{aligned} $$
(11.C9)

For the hazard, a solution is required for \( \psi (1,1,1, \ldots ,1,0;t,0) \), i.e., \( \psi (y_{1,0,0} = 1,y_{2,0,0} = 1,y_{3,0,0} = 1, \ldots ,y_{{k - 1,m_{r} ,r}} = 1,y_{k} = 0;t,s = 0) \), so that a particular characteristic must have the boundary value \( y_{\alpha ,\beta ,d} (t) = 1 \) and \( y_{k} (t) = 0 \) [implying by (11.C7) \( y_{k} (u) \equiv 0 \)], so that \( y_{\alpha ,\beta ,d} (u) \) is a function of both \( u \) and \( t \), i.e., \( y_{\alpha ,\beta ,d} (u) \equiv y_{\alpha ,\beta ,d} (u,t) \). Integrating (11.C5) over \( u \in [0,t] \) yields

$$ \psi (t) = \exp \left\{ {\int\limits_{0}^{t} {\left[ {\begin{array}{*{20}l} {\left\{ {y_{{1,0,0}} (u,t)-1} \right\} \cdot X(u) \cdot M(0,0,0)(u)} \hfill \\ { + \sum\limits_{{d = 1}}^{r} {\left\{ {y_{{0,1,d}} (u,t) - 1} \right\}} \cdot X(u) \cdot A(0,0,d)(u)} \hfill \\ \end{array} } \right]\;du} } \right\} $$
(11.C10)

Assume now that the model parameters \( G(\alpha ,\beta ,d)(t) \), \( D(\alpha ,\beta ,d)(t) \), \( M(\alpha ,\beta ,d)(t) \), \( A(\alpha ,\beta ,d)(t) \) and \( X(t) \) are constant over time. By substituting \( z_{\alpha ,\beta ,d} (u,t) = [y_{\alpha ,\beta ,d} (u,t) - 1] \cdot G(\alpha ,\beta ,d) \) into (11.C8) and (11.C9), the following can be obtained

$$ \left\{ {\begin{array}{*{20}l} {\frac{{dz_{{\alpha ,\beta ,d}} }}{{ds}} = - z_{{\alpha ,\beta ,d}} ^{2} + z_{{\alpha ,\beta ,d}} \cdot N\left[ {\alpha ,\beta ,d,z_{{\alpha ,\beta + 1,d}} ,z_{{\alpha + 1,\beta ,d}} } \right]\quad \quad when\;d \ne 0} \hfill \\ {\quad \quad +\,P\left[ {\alpha ,\beta ,d,z_{{\alpha ,\beta + 1,d}} ,z_{{\alpha + 1,\beta ,d}} } \right]} \hfill \\ {\frac{{dz_{{\alpha ,0,0}} }}{{ds}} = - z_{{\alpha ,0,0}} ^{2} + z_{{\alpha ,0,0}} \cdot N'\left[ {\alpha ,z_{{\alpha ,1,1}} , \cdots ,z_{{\alpha ,1,r}} ,z_{{\alpha + 1,0,0}} } \right]\quad when\;d = 0} \hfill \\ {\quad \quad +\,P'\left[ {\alpha ,z_{{\alpha ,1,1}} , \cdots ,z_{{\alpha ,1,r}} ,z_{{\alpha + 1,0,0}} } \right]} \hfill \\ \end{array} } \right. $$
(11.C11)

where

$$N\left[{\alpha ,\beta ,d,v,w} \right] = \left\{\begin{array}{ll} D(\alpha ,\beta ,d) - G(\alpha ,\beta ,d) - w \cdot \frac{{M(\alpha,\beta ,d)}}{{G(\alpha + 1,\beta ,d)}} \hfill \\ - v \cdot \frac{{A(\alpha ,\beta ,d)}}{{G(\alpha ,\beta + 1,d)}} & {(\alpha < k - 1,\beta < m_{d} )}\hfill \\ {D(\alpha ,\beta ,d) - G(\alpha,\beta ,d) + M(\alpha ,\beta ,d)} \hfill \\ { - v \cdot \frac{{A(\alpha ,\beta ,d)}}{{G(\alpha ,\beta + 1,d)}}} & {(\alpha = k - 1,\beta < m_{d} )}\\ {D(\alpha ,\beta ,d) - G(\alpha ,\beta ,d) - w \cdot \frac{{M(\alpha ,\beta ,d)}}{{G(\alpha + 1,\beta ,d)}}} & {(\alpha < k - 1,\beta = m_{d} )} \hfill \\ D(\alpha,\beta ,d) - G(\alpha ,\beta ,d) + M(\alpha ,\beta ,d) & {(\alpha = k - 1,\beta = m_{d} )}\\ \end{array}\right. $$
(11.C12)
$$ P\left[ {\alpha ,\beta ,d,v,w} \right] = \left\{ {\begin{array}{*{20}c} { - G(\alpha ,\beta ,d) \cdot \left[ {w \cdot \frac{{M(\alpha ,\beta ,d)}}{{G(\alpha + 1,\beta ,d)}} + v \cdot \frac{{A(\alpha ,\beta ,d)}}{{G(\alpha ,\beta + 1,d)}}} \right]} \hfill & {(\alpha < k - 1,\beta < m_{d} )} \hfill \\ {G(\alpha ,\beta ,d) \cdot \left[ {M(\alpha ,\beta ,d) - v \cdot \frac{{A(\alpha ,\beta ,d)}}{{G(\alpha ,\beta + 1,d)}}} \right]} \hfill & {(\alpha = k - 1,\beta < m_{d} )} \hfill \\ { - w \cdot \frac{{G(\alpha ,\beta ,d) \cdot M(\alpha ,\beta ,d)}}{{G(\alpha + 1,\beta ,d)}}} \hfill & {(\alpha < k - 1,\beta = m_{d} )} \hfill \\ {G(\alpha ,\beta ,d) \cdot M(\alpha ,\beta ,d)} \hfill & {(\alpha = k - 1,\beta = m_{d} )} \hfill \\ \end{array} } \right. $$
(11.C13)
$$ N^{\prime}\left[ {\alpha ,v_{1} , \cdots ,v_{r} ,w} \right] = \left\{ {\begin{array}{*{20}l} {D(\alpha ,0,0) - G(\alpha ,0,0) - w \cdot \frac{{M(\alpha ,0,0)}}{{G(\alpha + 1,0,0)}}} \hfill \\ { - \sum\limits_{{d = 1}}^{r} {v_{d} \cdot \frac{{A(\alpha ,0,d)}}{{G(\alpha ,1,d)}}\quad \quad \quad \quad \quad \quad \left( {0 < \alpha < k - 1} \right)} } \hfill \\ {D(\alpha ,0,0) - G(\alpha ,0,0) + M(\alpha ,0,0)} \hfill \\ { - \sum\limits_{{d = 1}}^{r} {v_{d} \cdot \frac{{A(\alpha ,0,d)}}{{G(\alpha ,1,d)}}\quad \quad \quad \quad \quad \quad \left( {\alpha = k - 1} \right)} } \hfill \\ \end{array} } \right. $$
(11.C14)

and

$$ P^{\prime}\left[ {\alpha ,v_{1} , \cdots ,v_{r} ,w} \right] = \left\{ {\begin{array}{*{20}l} { - G(\alpha ,0,0) \cdot \left[ {w \cdot \frac{{M(\alpha ,0,0)}}{{G(\alpha + 1,0,0)}} + \sum\limits_{{d = 1}}^{r} {v_{d} \cdot \frac{{A(\alpha ,0,d)}}{{G(\alpha ,1,d)}}} } \right]\;\;\;\; (0 < \alpha < k - 1)} \hfill \\ {G(\alpha ,0,0) \cdot \left[ {M(\alpha ,0,0) - \sum\limits_{{d = 1}}^{r} {v_{d} \cdot \frac{{A(\alpha ,0,d)}}{{G(\alpha ,1,d)}}} } \right]\quad \;\;\;\;\; (\alpha = k - 1)} \hfill \\ \end{array} } \right. $$
(11.C15)

Likewise, applying the same substitution, (11.C10) can be rewritten as

$$ \psi (t) = \exp \left\{ {\int\limits_{0}^{t} {z_{1,0,0} (s,t) \cdot \frac{X \cdot M(0,0,0)}{G(1,0,0)} + \sum\limits_{d = 1}^{r} {z_{0,1,d} (s,t) \cdot \frac{X \cdot A(0,0,d)}{G(0,1,d)}\;} ds} } \right\} $$
(11.C16)
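
To make the characteristic construction concrete, the following minimal sketch specializes (11.C5)–(11.C10) to the two-mutation case (\( k = 2 \), no destabilization, so only \( y_{1,0,0} \) is active and \( y_{k} \equiv 0 \)). It integrates the single characteristic ODE backward from the boundary value \( y(t) = 1 \), accumulating the exponent of (11.C10) along the way; the hazard then follows from (11.C1) by numerical differentiation. All rate values are illustrative, not fitted.

```python
# Sketch: hazard of the two-mutation special case via the characteristics
# (11.C5)-(11.C10). State = (y, integral), integrated backward from u = t.
import numpy as np
from scipy.integrate import solve_ivp

X, M0, M1, G1, D1 = 1e7, 1e-7, 1e-6, 0.1, 0.09   # hypothetical constant rates

def char_rhs(u, state):
    y, _ = state
    # dy/du from (11.C8) with alpha = k - 1 = 1; the y_{alpha+1} = y_k = 0
    # cross term vanishes
    dy = y * (D1 + G1 + M1) - G1 * y ** 2 - D1
    dI = (y - 1.0) * X * M0                       # integrand of ln psi in (11.C10)
    return [dy, dI]

def log_psi(t):
    # boundary value y(t) = 1; integrate back to u = 0 and flip the sign of the
    # accumulated integral (we integrated from t down to 0)
    sol = solve_ivp(char_rhs, [t, 0.0], [1.0, 0.0], rtol=1e-9, atol=1e-12)
    return -sol.y[1, -1]

ts = np.linspace(1.0, 80.0, 80)
lp = np.array([log_psi(t) for t in ts])
hazard = -np.gradient(lp, ts)                     # h(t) = -(d/dt) ln psi(t), cf. (11.C1)
print(hazard[:5])
```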

Appendix D

4.1 Derivation of the Kolmogorov Forward Differential Equation for the Cancer Model of Little et al. [9]

In this Appendix we derive the Kolmogorov forward differential equation defining the generating function of the cancer model of Little et al. [9]. The generating function \( \psi \) is given by

$$ \begin{array}{*{20}l} {\psi (y_{1,0,0} ,y_{2,0,0} , \ldots ,y_{k - 1,0,0} ,y_{0,1,1} , \ldots ,y_{k - 1,1,1} ,y_{0,2,1} ,y_{1,2,1} , \ldots ,y_{{k - 1,m_{r} ,r}} ,y_{k} ;t,s)} \hfill \\ { \equiv \psi (t,s)} \hfill \\ { = \sum\limits_{n} {y_{1,0,0}^{{n_{1,0,0} }} \cdot \ldots \cdot y_{k - 1,0,0}^{{n_{k - 1,0,0} }} \cdot y_{0,1,1}^{{n_{0,1,1} }} \cdot \ldots \cdot y_{k - 1,1,1}^{{n_{k - 1,1,1} }} \cdot \ldots \cdot y_{{k - 1,m_{r} ,r}}^{{n_{{k - 1,m_{r} ,r}} }} \cdot y_{k}^{{n_{k} }} \times } } \hfill \\ {\quad \quad P[Y_{1,0,0} (t) = n_{1,0,0} , \ldots ,Y_{k} (t) = n_{k} \left| {N(s) = X(0),Y_{1,0,0} (s) = \ldots = Y_{k} (s) = 0} \right.]} \hfill \\ \end{array} $$
(11.D1)

By differentiating term by term (justified by the absolute convergence of the derivative power series) \( \psi \) satisfies:

$$ \begin{array}{*{20}l} {\frac{{\partial \psi }}{{\partial t}}[t,s] = \sum\limits_{{n_{{1,0,0}} ,n_{{2,0,0}} , \ldots ,n_{k} }} {y_{{1,0,0}} ^{{n_{{1,0,0}} }} \cdot \ldots \cdot y_{{k - 1,m_{r} ,r}} ^{{n_{{k - 1,m_{r} ,r}} }} \cdot y_{k} ^{{n_{k} }} \cdot \frac{{dP}}{{dt}}[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{k - 1,m_{r} ,r}} (t) = n_{{k - 1,m_{r} ,r}} ,Y_{k} (t) = n_{k} ]} } \hfill \\ { = \sum\limits_{{n_{{1,0,0}} ,n_{{2,0,0}} , \ldots ,n_{k} }} {y_{{1,0,0}} ^{{n_{{1,0,0}} }} \cdot \ldots \cdot y_{{k - 1,m_{r} ,r}} ^{{n_{{k - 1,m_{r} ,r}} }} \cdot y_{k} ^{{n_{k} }} \cdot X(t) \cdot M(0,0,0)(t) \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} - 1, \ldots ,Y_{{k - 1,m_{r} ,r}} (t) = n_{{k - 1,m_{r} ,r}} ,Y_{k} (t) = n_{k} ]} + } \hfill \\ {\sum\limits_{{d = 1}}^{r} {\sum\limits_{{n_{{1,0,0}} ,n_{{2,0,0}} , \ldots ,n_{k} }} {y_{{1,0,0}} ^{{n_{{1,0,0}} }} \cdot \ldots \cdot y_{{k - 1,m_{r} ,r}} ^{{n_{{k - 1,m_{r} ,r}} }} \cdot y_{k} ^{{n_{k} }} \cdot X(t) \cdot A(0,0,d)(t) \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{0,1,d}} (t) = n_{{0,1,d}} - 1, \ldots ,Y_{{k - 1,m_{r} ,r}} (t) = n_{{k - 1,m_{r} ,r}} ,Y_{k} (t) = n_{k} ]} - } } \hfill \\ {\sum\limits_{{n_{{1,0,0}} ,n_{{2,0,0}} , \ldots ,n_{k} }} {y_{{1,0,0}} ^{{n_{{1,0,0}} }} \cdot \ldots \cdot y_{{k - 1,m_{r} ,r}} ^{{n_{{k - 1,m_{r} ,r}} }} \cdot y_{k} ^{{n_{k} }} \cdot X(t) \cdot M(0,0,0)(t) \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{k - 1,m_{r} ,r}} (t) = n_{{k - 1,m_{r} ,r}} ,Y_{k} (t) = n_{k} ]} - } \hfill \\ {\sum\limits_{{d = 1}}^{r} {\sum\limits_{{n_{{1,0,0}} ,n_{{2,0,0}} , \ldots ,n_{k} }} {y_{{1,0,0}} ^{{n_{{1,0,0}} }} \cdot \ldots \cdot y_{{k - 1,m_{r} ,r}} ^{{n_{{k - 1,m_{r} ,r}} }} \cdot y_{k} ^{{n_{k} }} \cdot X(t) \cdot A(0,0,d)(t) \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{k - 1,m_{r} ,r}} (t) = n_{{k - 1,m_{r} ,r}} ,Y_{k} (t) = n_{k} ]} + } } \hfill \\ {\sum\limits_{{1 \le \alpha \le k - 1}} {\sum\limits_{{n_{{1,0,0}} ,n_{{2,0,0}} , \ldots ,n_{k} }} {y_{{1,0,0}} ^{{n_{{1,0,0}} }} \cdot \ldots \cdot y_{{k - 1,m_{r} ,r}} ^{{n_{{k - 1,m_{r} ,r}} }} \cdot y_{k} ^{{n_{k} }} \cdot \left[ {\begin{array}{*{20}l} {G(\alpha ,0,0)(t) \cdot [n_{{\alpha ,0,0}} - 1] \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{\alpha ,0,0}} (t) = n_{{\alpha ,0,0}} - 1, \ldots ,Y_{k} (t) = n_{k} ] + } \hfill \\ {D(\alpha ,0,0)(t) \cdot [n_{{\alpha ,0,0}} + 1] \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{\alpha ,0,0}} (t) = n_{{\alpha ,0,0}} + 1, \ldots ,Y_{k} (t) = n_{k} ] + } \hfill \\ {M(\alpha ,0,0)(t) \cdot n_{{\alpha ,0,0}} \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{\alpha ,0,0}} (t) = n_{{\alpha ,0,0}} ,Y_{{\alpha + 1,0,0}} (t) = n_{{\alpha + 1,0,0}} - 1, \ldots ,Y_{k} (t) = n_{k} ] + } \hfill \\ {\sum\limits_{{d' = 1}}^{r} {A(\alpha ,0,d')(t) \cdot n_{{\alpha ,0,0}} \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{\alpha ,0,0}} (t) = n_{{\alpha ,0,0}} , \ldots ,Y_{{\alpha ,1,d'}} (t) = n_{{\alpha ,1,d'}} - 1, \ldots ,Y_{k} (t) = n_{k} ]} - } \hfill \\ {n_{{\alpha ,0,0}} \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{k} (t) = n_{k} ] \cdot \left[ {G(\alpha ,0,0)(t) + D(\alpha ,0,0)(t) + M(\alpha ,0,0)(t) + \sum\limits_{{d' = 1}}^{r} {A(\alpha ,0,d')(t)} } \right]} \hfill \\ \end{array} } \right] + } } } \hfill \\ {\sum\limits_{{\begin{array}{*{20}l} {0 \le \alpha \le k - 1} \hfill \\ {1 \le \beta \le m_{d} } \hfill \\ {1 \le d \le r} \hfill \\ \end{array} }} {\sum\limits_{{n_{{1,0,0}} ,n_{{2,0,0}} , \ldots ,n_{k} }} {y_{{1,0,0}} ^{{n_{{1,0,0}} }} \cdot \ldots \cdot y_{{k - 1,m_{r} ,r}} ^{{n_{{k - 1,m_{r} ,r}} }} \cdot y_{k} ^{{n_{k} }} \cdot \left[ {\begin{array}{*{20}l} {G(\alpha ,\beta ,d)(t) \cdot [n_{{\alpha ,\beta ,d}} - 1] \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{\alpha ,\beta ,d}} (t) = n_{{\alpha ,\beta ,d}} - 1, \ldots ,Y_{k} (t) = n_{k} ] + } \hfill \\ {D(\alpha ,\beta ,d)(t) \cdot [n_{{\alpha ,\beta ,d}} + 1] \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{\alpha ,\beta ,d}} (t) = n_{{\alpha ,\beta ,d}} + 1, \ldots ,Y_{k} (t) = n_{k} ] + } \hfill \\ {M(\alpha ,\beta ,d)(t) \cdot n_{{\alpha ,\beta ,d}} \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{\alpha ,\beta ,d}} (t) = n_{{\alpha ,\beta ,d}} ,Y_{{\alpha + 1,\beta ,d}} (t) = n_{{\alpha + 1,\beta ,d}} - 1, \ldots ,Y_{k} (t) = n_{k} ] + } \hfill \\ {A(\alpha ,\beta ,d)(t) \cdot n_{{\alpha ,\beta ,d}} \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{{\alpha ,\beta ,d}} (t) = n_{{\alpha ,\beta ,d}} , \ldots ,Y_{{\alpha ,\beta + 1,d}} (t) = n_{{\alpha ,\beta + 1,d}} - 1, \ldots ,Y_{k} (t) = n_{k} ] - } \hfill \\ {n_{{\alpha ,\beta ,d}} \cdot P[Y_{{1,0,0}} (t) = n_{{1,0,0}} , \ldots ,Y_{k} (t) = n_{k} ] \cdot \left[ {G(\alpha ,\beta ,d)(t) + D(\alpha ,\beta ,d)(t) + M(\alpha ,\beta ,d)(t) + A(\alpha ,\beta ,d)(t)} \right]} \hfill \\ \end{array} } \right]} } } \hfill \\ { = \psi \cdot [y_{{1,0,0}} - 1] \cdot X(t) \cdot M(0,0,0)(t) + \psi \cdot \sum\limits_{{d = 1}}^{r} {[y_{{0,1,d}} - 1] \cdot X(t) \cdot A(0,0,d)(t)} + } \hfill \\ {\sum\limits_{{1 \le \alpha \le k - 1}} {\left[ {\begin{array}{*{20}l} { - y_{{\alpha ,0,0}} \cdot [D(\alpha ,0,0)(t) + G(\alpha ,0,0)(t) + M(\alpha ,0,0)(t)} \hfill \\ { + \sum\limits_{{d' = 1}}^{r} {A(\alpha ,0,d')(t)} ] + y_{{\alpha ,0,0}} ^{2} \cdot G(\alpha ,0,0)(t) + D(\alpha ,0,0)(t)} \hfill \\ { + y_{{\alpha ,0,0}} \cdot y_{{\alpha + 1,0,0}} \cdot M(\alpha ,0,0)(t) + \sum\limits_{{d' = 1}}^{r} {y_{{\alpha ,0,0}} \cdot y_{{\alpha ,1,d'}} \cdot A(\alpha ,0,d')(t)} } \hfill \\ \end{array} } \right] \cdot \frac{{\partial \psi }}{{\partial y_{{\alpha ,0,0}} }}} } \; + \hfill \\ {\sum\limits_{{\begin{array}{*{20}l} {0 \le \alpha \le k - 1} \hfill \\ {1 \le \beta \le m_{d} } \hfill \\ {1 \le d \le r} \hfill \\ \end{array} }} {\left[ {\begin{array}{*{20}l} { - y_{{\alpha ,\beta ,d}} \cdot [D(\alpha ,\beta ,d)(t) + G(\alpha ,\beta ,d)(t) + M(\alpha ,\beta ,d)(t)} \hfill \\ { + A(\alpha ,\beta ,d)(t)] + y_{{\alpha ,\beta ,d}} ^{2} \cdot G(\alpha ,\beta ,d)(t) + D(\alpha ,\beta ,d)(t)} \hfill \\ { + y_{{\alpha ,\beta ,d}} \cdot y_{{\alpha + 1,\beta ,d}} \cdot M(\alpha ,\beta ,d)(t) + y_{{\alpha ,\beta ,d}} \cdot y_{{\alpha ,\beta + 1,d}} \cdot A(\alpha ,\beta ,d)(t)} \hfill \\ \end{array} } \right] \cdot \frac{{\partial \psi }}{{\partial y_{{\alpha ,\beta ,d}} }}} } \hfill \\ \end{array} $$
(11.D2)
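
The forward equation (11.D2) can also be cross-checked by direct simulation: \( \psi (1,1, \ldots ,1,0;t,0) \) is the probability that no malignant cell has appeared by time \( t \), so a Gillespie simulation of the underlying branching process should reproduce it. The sketch below does this for the two-mutation special case; the rates are purely illustrative and match those in the characteristic sketch of Appendix C.

```python
# Hedged Monte Carlo cross-check: Gillespie simulation of the two-mutation
# process (initiation X*M0, birth G1, death D1, malignant transformation M1);
# the fraction of cancer-free paths at T estimates psi(T).
import numpy as np

rng = np.random.default_rng(1)
X, M0, G1, D1, M1 = 1e7, 1e-7, 0.1, 0.09, 1e-6
T = 60.0

def one_path():
    t, n = 0.0, 0                    # n = number of intermediate cells
    while True:
        rates = np.array([X * M0, n * G1, n * D1, n * M1])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t >= T:
            return False             # no cancer by time T
        event = rng.choice(4, p=rates / total)
        if event == 0 or event == 1:
            n += 1                   # initiation, or symmetric division
        elif event == 2:
            n -= 1                   # death/differentiation (rate 0 when n = 0)
        else:
            return True              # malignant transformation

paths = 2000
p_cancer = np.mean([one_path() for _ in range(paths)])
print(1.0 - p_cancer)                # compare with psi(T) from the characteristic ODEs
```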

Appendix E

5.1 Derivation of System of Differential Equations Defining the Hessian of the Hazard Function for the Cancer Model of Little and Wright [8] and Little et al. [9]

In this Appendix we derive the set of differential equations defining the Hessian (with respect to the model parameters) of the hazard function for the cancer models of Little and Wright [8] and Little et al. [9] in the case when all model parameters are constant. For simplicity we present only the derivation for the simpler model of Little and Wright [8]; the derivation for the more complex model of Little et al. [9] is straightforward but lengthy. This allows us to drop the final identifying label in each of \( G(\alpha ,\beta ,d),D(\alpha ,\beta ,d),M(\alpha ,\beta ,d),A(\alpha ,\beta ,d) \), which we henceforth write as \( G(\alpha ,\beta ),D(\alpha ,\beta ),M(\alpha ,\beta ),A(\alpha ,\beta ) \), respectively. The hazard function of the cancer model with \( k \) cancer-stage mutations and \( m \) destabilizing mutations developed by Little and Wright [8] may be written as:

$$ h(t) = - \int\limits_{0}^{t} {\left\{ {\frac{{\partial \phi_{1,0} [t,s]}}{\partial t} \cdot M(0,0) + \frac{{\partial \phi_{0,1} [t,s]}}{\partial t} \cdot A(0,0)} \right\} \cdot Xds} $$
(11.E1)

where the PGFs \( \phi _{i,j} \) also satisfy the following Kolmogorov backward equations (for \( 0 \le i \le k - 1 \), \( 0 \le j \le m \), \( (i,j) \ne (0,0) \)):

$$ \begin{array}{*{20}l} {\frac{{\partial \phi _{i,j} }}{\partial s}[t,s] = [D(i,j) + G(i,j) + M(i,j) + A(i,j)] \cdot \phi _{i,j} [t,s]} \hfill \\\qquad\quad\;\;{ - G(i,j) \cdot \phi _{i,j} [t,s]^{2} - M(i,j) \cdot \phi _{i,j} [t,s] \cdot \phi _{i + 1,j} [t,s]} \hfill \\ \qquad\quad\;\;{ - A(i,j) \cdot \phi _{i,j} [t,s] \cdot \phi _{i,j + 1} [t,s] - D(i,j)} \hfill \\ \end{array} $$
(11.E2)

Differentiating (11.E1) gives:

$$ \begin{array}{*{20}l} {\frac{\partial h(t)}{\partial X}= - \int\limits_{0}^{t} {\left\{ {\frac{{\partial \phi_{1,0} [t,s]}}{\partial t} \cdot M(0,0) + \frac{{\partial \phi_{0,1} [t,s]}}{\partial t} \cdot A(0,0)} \right\}ds} } \hfill \\ \quad\;\;{ =[1 - \phi_{1,0} [t,0]] \cdot M(0,0) + [1 - \phi_{0,1} [t,0]] \cdot A(0,0)} \hfill \\ \end{array} $$
(11.E3)
$$ \frac{\partial h(t)}{\partial M(0,0)} = - \int\limits_{0}^{t} {\frac{{\partial \phi_{1,0} [t,s]}}{\partial t} \cdot Xds} = [1 - \phi_{1,0} [t,0]] \cdot X $$
(11.E4)
$$ \frac{\partial h(t)}{\partial A(0,0)} = - \int\limits_{0}^{t} {\frac{{\partial \phi_{0,1} [t,s]}}{\partial t} \cdot Xds} = [1 - \phi_{0,1} [t,0]] \cdot X $$
(11.E5)

and for all other model parameters, \( \beta_{k} \):

$$ \begin{array}{*{20}l} {\frac{\partial h(t)}{{\partial \beta_{k} }} = - \int\limits_{0}^{t} {\left\{ {\frac{{\partial^{2} \phi_{1,0} [t,s]}}{{\partial \beta_{k} \partial t}} \cdot M(0,0) + \frac{{\partial^{2} \phi_{0,1} [t,s]}}{{\partial \beta_{k} \partial t}} \cdot A(0,0)} \right\}Xds} } \hfill \\ \quad\;\; { = - \frac{{\partial \phi_{1,0} [t,0]}}{{\partial \beta_{k} }} \cdot M(0,0) - \frac{{\partial \phi_{0,1} [t,0]}}{{\partial \beta_{k} }} \cdot A(0,0)} \hfill \\ \end{array} $$
(11.E6)

Likewise, we can evaluate the second derivatives by differentiating (11.E3)–(11.E5) further:

$$ \frac{{\partial^{2} h(t)}}{\partial X\partial M(0,0)} = 1 - \phi_{1,0} [t,0] $$
(11.E7)
$$ \frac{{\partial^{2} h(t)}}{\partial X\partial A(0,0)} = 1 - \phi_{0,1} [t,0] $$
(11.E8)
$$ \frac{{\partial^{2} h(t)}}{{\partial^{2} M(0,0)}} = \frac{{\partial^{2} h(t)}}{{\partial^{2} A(0,0)}} = \frac{{\partial^{2} h(t)}}{\partial A(0,0)\partial M(0,0)} = 0 $$
(11.E9)

and for all model parameters, \( \beta_{k} ,\beta_{l} \notin \left\{ {X,M(0,0),A(0,0)} \right\} \):

$$ \frac{{\partial^{2} h(t)}}{{\partial \beta_{k} \partial X}} = - \frac{{\partial \phi_{1,0} [t,0]}}{{\partial \beta_{k} }} \cdot M(0,0) - \frac{{\partial \phi_{0,1} [t,0]}}{{\partial \beta_{k} }} \cdot A(0,0) $$
(11.E10)
$$ \frac{{\partial^{2} h(t)}}{{\partial \beta_{k} \partial M(0,0)}} = - \frac{{\partial \phi_{1,0} [t,0]}}{{\partial \beta_{k} }} \cdot X $$
(11.E11)
$$ \frac{{\partial^{2} h(t)}}{{\partial \beta_{k} \partial A(0,0)}} = - \frac{{\partial \phi_{0,1} [t,0]}}{{\partial \beta_{k} }} \cdot X $$
(11.E12)
$$ \frac{{\partial^{2} h(t)}}{{\partial \beta_{k} \partial \beta_{l} }} = - \frac{{\partial^{2} \phi_{1,0} [t,0]}}{{\partial \beta_{k} \partial \beta_{l} }} \cdot M(0,0) - \frac{{\partial^{2} \phi_{0,1} [t,0]}}{{\partial \beta_{k} \partial \beta_{l} }} \cdot A(0,0) $$
(11.E13)

We can evaluate \( \frac{{\partial \phi_{i,j} [t,s]}}{{\partial \beta_{k} }} \) by differentiating (11.E2), for \( \beta_{k} \notin \left\{ {D(i,j),G(i,j),M(i,j),A(i,j)} \right\} \):

$$ \begin{array}{*{20}l} {\frac{{\partial^{2} \phi_{i,j} }}{{\partial \beta_{k} \partial s}}[t,s] = [D(i,j) + G(i,j) + M(i,j) + A(i,j)] \cdot \frac{{\partial \phi_{i,j} }}{{\partial \beta_{k} }}[t,s]} \hfill \\ \quad \quad \quad\;{ - 2 \cdot G(i,j) \cdot \phi_{i,j} [t,s] \cdot \frac{{\partial \phi_{i,j} }}{{\partial \beta_{k} }}[t,s] - M(i,j) \cdot \left[ {\frac{{\partial \phi_{i,j} }}{{\partial \beta_{k} }}[t,s] \cdot \phi_{i + 1,j} [t,s] + \frac{{\partial \phi_{i + 1,j} }}{{\partial \beta_{k} }}[t,s] \cdot \phi_{i,j} [t,s]} \right]} \hfill \\ \quad\quad\quad\;{ - A(i,j) \cdot \left[ {\frac{{\partial \phi_{i,j} }}{{\partial \beta_{k} }}[t,s] \cdot \phi_{i,j + 1} [t,s] + \frac{{\partial \phi_{i,j + 1} }}{{\partial \beta_{k} }}[t,s] \cdot \phi_{i,j} [t,s]} \right]} \hfill \\ \quad\qquad\; { = \Upomega_{i,j} \left[ {\beta_{k} ,t,s} \right]} \hfill \\ \end{array} $$
(11.E14)

with appropriate initial conditions (discussed later). For \( \beta_{k} \in \left\{ {D(i,j),G(i,j),M(i,j),A(i,j)} \right\} \) we have:

$$ \frac{{\partial ^{2} \phi _{{i,j}} }}{{\partial \beta _{k} \partial s}}[t,s] = \left\{ {\begin{array}{*{20}l} {\Omega _{{i,j}} \left[ {\beta _{k} ,t,s} \right] + \phi _{{i,j}} [t,s] - 1} \hfill & {\beta _{k} = D(i,j)} \hfill \\ {\Omega _{{i,j}} \left[ {\beta _{k} ,t,s} \right] + \phi _{{i,j}} [t,s] - \phi _{{i,j}} [t,s]^{2} } \hfill & {\beta _{k} = G(i,j)} \hfill \\ {\Omega _{{i,j}} \left[ {\beta _{k} ,t,s} \right] + \phi _{{i,j}} [t,s] - \phi _{{i,j}} [t,s]\phi _{{i + 1,j}} [t,s]} \hfill & {\beta _{k} = M(i,j)} \hfill \\ {\Omega _{{i,j}} \left[ {\beta _{k} ,t,s} \right] + \phi _{{i,j}} [t,s] - \phi _{{i,j}} [t,s]\phi _{{i,j + 1}} [t,s]} \hfill & {\beta _{k} = A(i,j){\text{ }}} \hfill \\ \end{array} } \right. $$
(11.E15)

Likewise, we can evaluate \( \frac{{\partial^{2} h(t)}}{{\partial \beta_{k} \partial \beta_{l} }} \) by differentiating (11.E14), for \( \beta_{k} ,\beta_{l} \notin \left\{ {D(i,j),G(i,j),M(i,j),A(i,j)} \right\} \):

$$ \begin{array}{*{20}l} {\frac{{\partial ^{3} \phi _{{i,j}} }}{{\partial \beta _{k} \partial \beta _{l} \partial s}}[t,s] = [D(i,j) + G(i,j) + M(i,j) + A(i,j)] \cdot \frac{{\partial ^{2} \phi _{{i,j}} }}{{\partial \beta _{k} \partial \beta _{l} }}[t,s]} \hfill \\ \quad\quad\quad \quad\;{ - 2 \cdot G(i,j) \cdot \left[ {\frac{{\partial \phi _{{i,j}} }}{{\partial \beta _{k} }}[t,s] \cdot \frac{{\partial \phi _{{i,j}} }}{{\partial \beta _{l} }}[t,s] + \phi _{{i,j}} [t,s] \cdot \frac{{\partial ^{2} \phi _{{i,j}} }}{{\partial \beta _{k} \partial \beta _{l} }}[t,s]} \right]} \hfill \\ \quad\quad\quad\quad\; { - M(i,j) \cdot \left[ {\begin{array}{*{20}l} {\frac{{\partial ^{2} \phi _{{i,j}} }}{{\partial \beta _{k} \partial \beta _{l} }}[t,s] \cdot \phi _{{i + 1,j}} [t,s] + \frac{{\partial \phi _{{i,j}} }}{{\partial \beta _{k} }}[t,s] \cdot \frac{{\partial \phi _{{i + 1,j}} }}{{\partial \beta _{l} }}[t,s] + \frac{{\partial \phi _{{i,j}} }}{{\partial \beta _{l} }}[t,s] \cdot \frac{{\partial \phi _{{i + 1,j}} }}{{\partial \beta _{k} }}[t,s] + } \hfill \\ {\frac{{\partial ^{2} \phi _{{i + 1,j}} }}{{\partial \beta _{k} \partial \beta _{l} }}[t,s] \cdot \phi _{{i,j}} [t,s]} \hfill \\ \end{array} } \right]} \hfill \\ \quad\quad\quad\quad\; { - A(i,j) \cdot \left[ {\begin{array}{*{20}l} {\frac{{\partial ^{2} \phi _{{i,j}} }}{{\partial \beta _{k} \partial \beta _{l} }}[t,s] \cdot \phi _{{i,j + 1}} [t,s] + \frac{{\partial \phi _{{i,j}} }}{{\partial \beta _{k} }}[t,s] \cdot \frac{{\partial \phi _{{i,j + 1}} }}{{\partial \beta _{l} }}[t,s] + \frac{{\partial \phi _{{i,j}} }}{{\partial \beta _{l} }}[t,s] \cdot \frac{{\partial \phi _{{i,j + 1}} }}{{\partial \beta _{k} }}[t,s] + } \hfill \\ {\frac{{\partial ^{2} \phi _{{i,j + 1}} }}{{\partial \beta _{k} \partial \beta _{l} }}[t,s] \cdot \phi _{{i,j}} [t,s]} \hfill \\ \end{array} } \right]} \hfill \\ \quad\quad\quad\;\; { = \Uppsi_{{i,j}} \left[ {\beta _{k} ,\beta _{l} ,t,s} \right]} \hfill \\ \end{array} $$
(11.E16)

For \( \beta_{k} \notin \left\{ {D(i,j),G(i,j),M(i,j),A(i,j)} \right\} \) and \( \beta_{l} \in \left\{ {D(i,j),G(i,j),M(i,j),A(i,j)} \right\} \) we have:

$$ \frac{{\partial^{3} \phi_{i,j} }}{{\partial \beta_{k} \partial \beta_{l} \partial s}}[t,s] = \left\{ {\begin{array}{*{20}l} {\Uppsi_{i,j} \left[ {\beta_{k} ,\beta_{l} ,t,s} \right] + \frac{{\partial \phi_{i,j} [t,s]}}{{\partial \beta_{k} }}\quad \quad \quad \quad \quad \quad \beta_{l} = D(i,j)} \hfill \\ {\Uppsi_{i,j} \left[ {\beta_{k} ,\beta_{l} ,t,s} \right] + \frac{{\partial \phi_{i,j} [t,s]}}{{\partial \beta_{k} }}\quad \quad \quad \quad \quad \quad \beta_{l} = G(i,j)} \hfill \\ {\quad \quad - 2 \cdot \frac{{\partial \phi_{i,j} [t,s]}}{{\partial \beta_{k} }} \cdot \phi_{i,j} [t,s]} \hfill \\ {\Uppsi_{i,j} \left[ {\beta_{k} ,\beta_{l} ,t,s} \right] + \frac{{\partial \phi_{i,j} [t,s]}}{{\partial \beta_{k} }} - \frac{{\partial \phi_{i,j} [t,s]}}{{\partial \beta_{k} }} \cdot \phi_{i + 1,j} [t,s]\; \beta_{l} = M(i,j)} \hfill \\ {\quad \quad - \frac{{\partial \phi_{i + 1,j} [t,s]}}{{\partial \beta_{k} }} \cdot \phi_{i,j} [t,s]} \hfill \\ {\Uppsi_{i,j} \left[ {\beta_{k} ,\beta_{l} ,t,s} \right] + \frac{{\partial \phi_{i,j} [t,s]}}{{\partial \beta_{k} }} - \frac{{\partial \phi_{i,j} [t,s]}}{{\partial \beta_{k} }} \cdot \phi_{i,j + 1} [t,s]\; \beta_{l} = A(i,j)} \hfill \\ {\quad \quad - \frac{{\partial \phi_{i,j + 1} [t,s]}}{{\partial \beta_{k} }} \cdot \phi_{i,j} [t,s]\;} \hfill \\ \end{array} } \right. $$
(11.E17)

Finally, we have that:

$$ \frac{{\partial^{3} \phi_{i,j} }}{{\partial D(i,j)^{2} \partial s}}[t,s] = \Uppsi_{i,j} \left[ {D(i,j),D(i,j),t,s} \right] + 2 \cdot \frac{{\partial \phi_{i,j} [t,s]}}{\partial D(i,j)} $$
(11.E18)
$$ \frac{{\partial^{3} \phi_{i,j} }}{\partial D(i,j)\partial G(i,j)\partial s}[t,s] = \Uppsi_{i,j} \left[ {D(i,j),G(i,j),t,s} \right] + \frac{{\partial \phi_{i,j} [t,s]}}{\partial D(i,j)} + \frac{{\partial \phi_{i,j} [t,s]}}{\partial G(i,j)} - 2 \cdot \frac{{\partial \phi_{i,j} [t,s]}}{\partial D(i,j)} \cdot \phi_{i,j} [t,s] $$
(11.E19)
$$ \frac{{\partial^{3} \phi_{i,j} }}{\partial D(i,j)\partial M(i,j)\partial s}[t,s] = \Uppsi_{i,j} \left[ {D(i,j),M(i,j),t,s} \right] + \frac{{\partial \phi_{i,j} [t,s]}}{\partial D(i,j)} + \frac{{\partial \phi_{i,j} [t,s]}}{\partial M(i,j)} - \frac{{\partial \phi_{i,j} [t,s]}}{\partial D(i,j)} \cdot \phi_{i + 1,j} [t,s] $$
(11.E20)
$$ \frac{{\partial^{3} \phi_{i,j} }}{\partial D(i,j)\partial A(i,j)\partial s}[t,s] = \Uppsi_{i,j} \left[ {D(i,j),A(i,j),t,s} \right] + \frac{{\partial \phi_{i,j} [t,s]}}{\partial D(i,j)} + \frac{{\partial \phi_{i,j} [t,s]}}{\partial A(i,j)} - \frac{{\partial \phi_{i,j} [t,s]}}{\partial D(i,j)} \cdot \phi_{i,j + 1} [t,s] $$
(11.E21)
$$ \frac{{\partial^{3} \phi_{i,j} }}{{\partial G(i,j)^{2} \partial s}}[t,s] = \Uppsi_{i,j} \left[ {G(i,j),G(i,j),t,s} \right] + 2 \cdot \frac{{\partial \phi_{i,j} [t,s]}}{\partial G(i,j)} - 4 \cdot \frac{{\partial \phi_{i,j} [t,s]}}{\partial G(i,j)} \cdot \phi_{i,j} [t,s] $$
(11.E22)
$$ \begin{array}{*{20}l} {\frac{{\partial ^{3} \phi _{{i,j}} }}{{\partial G(i,j)\partial M(i,j)\partial s}}[t,s] = \Uppsi_{{i,j}} \left[ {G(i,j),M(i,j),t,s} \right] + \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial G(i,j)}} + \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial M(i,j)}} - 2 \cdot \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial M(i,j)}} \cdot \phi _{{i,j}} [t,s]} \hfill \\ { - \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial G(i,j)}} \cdot \phi _{{i + 1,j}} [t,s]} \hfill \\ \end{array} $$
(11.E23)
$$ \begin{array}{*{20}l} {\frac{{\partial ^{3} \phi _{{i,j}} }}{{\partial G(i,j)\partial A(i,j)\partial s}}[t,s] = \Uppsi _{{i,j}} \left[ {G(i,j),A(i,j),t,s} \right] + \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial G(i,j)}} + \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial A(i,j)}} - 2 \cdot \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial A(i,j)}} \cdot \phi _{{i,j}} [t,s]} \hfill \\ { - \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial G(i,j)}} \cdot \phi _{{i,j + 1}} [t,s]} \hfill \\ \end{array} $$
(11.E24)
$$ \frac{{\partial^{3} \phi_{i,j} }}{{\partial M(i,j)^{2} \partial s}}[t,s] = \Uppsi_{i,j} \left[ {M(i,j),M(i,j),t,s} \right] + 2 \cdot \frac{{\partial \phi_{i,j} [t,s]}}{\partial M(i,j)} - 2 \cdot \frac{{\partial \phi_{i,j} [t,s]}}{\partial M(i,j)} \cdot \phi_{i + 1,j} [t,s] $$
(11.E25)
$$ \begin{array}{*{20}l} {\frac{{\partial ^{3} \phi _{{i,j}} }}{{\partial M(i,j)\partial A(i,j)\partial s}}[t,s] = \Uppsi _{{i,j}} \left[ {M(i,j),A(i,j),t,s} \right] + \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial M(i,j)}} + \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial A(i,j)}} - \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial M(i,j)}} \cdot \phi _{{i,j + 1}} [t,s]} \hfill \\ { - \frac{{\partial \phi _{{i,j}} [t,s]}}{{\partial A(i,j)}} \cdot \phi _{{i + 1,j}} [t,s]} \hfill \\ \end{array} $$
(11.E26)
$$ \frac{{\partial^{3} \phi_{i,j} }}{{\partial A(i,j)^{2} \partial s}}[t,s] = \Uppsi_{i,j} \left[ {A(i,j),A(i,j),t,s} \right] + 2 \cdot \frac{{\partial \phi_{i,j} [t,s]}}{\partial A(i,j)} - 2 \cdot \frac{{\partial \phi_{i,j} [t,s]}}{\partial A(i,j)} \cdot \phi_{i,j + 1} [t,s] $$
(11.E27)

As in Little and Wright [8], the following boundary conditions must be satisfied, for all \( i,j,\beta_{k} ,\beta_{l} \):

$$ \phi_{i,j} [t,t] = 1\;\quad 0 \le i \le k - 1 $$
(11.E28)
$$ \frac{{\partial \phi_{i,j} [t,t]}}{{\partial \beta_{k} }} = \frac{{\partial^{2} \phi_{i,j} [t,t]}}{{\partial \beta_{k} \partial \beta_{l} }} = 0 $$
(11.E29)

This system of ordinary differential equations (in the variable \( s \)) for \( \phi_{i,j} [t,s] \), \( \frac{{\partial \phi_{i,j} [t,s]}}{{\partial \beta_{k} }} \) and \( \frac{{\partial^{2} \phi_{i,j} [t,s]}}{{\partial \beta_{k} \partial \beta_{l} }} \) was integrated using the Bulirsch–Stoer algorithm with adaptive stepsize control [24]. Very similar results were obtained using a Runge–Kutta integrator with adaptive stepsize control [24].
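
A minimal sketch of this backward integration, for the two-stage special case (\( k = 2 \), \( m = 0 \), \( A \equiv 0 \), \( \phi_{2,0} \equiv y_{k} = 0 \)) and the single sensitivity \( \partial \phi_{1,0} /\partial G \) from (11.E14)–(11.E15), is given below. scipy's adaptive Runge–Kutta integrator stands in for the Bulirsch–Stoer routine of [24]; the rate values are illustrative only.

```python
# Sketch: backward integration (in s) of the PGF phi and one sensitivity,
# with boundary conditions (11.E28)-(11.E29) imposed at s = t.
import numpy as np
from scipy.integrate import solve_ivp

D, G, M = 0.09, 0.10, 1e-6                        # hypothetical constant rates

def rhs(s, state):
    phi, dphi_dG = state
    # (11.E2) with A = 0 and phi_{i+1,j} = phi_{i,j+1} = 0
    dphi = (D + G + M) * phi - G * phi ** 2 - D
    # (11.E14)-(11.E15), case beta_k = G(i,j): Omega term plus phi - phi^2
    dsens = (D + G + M - 2.0 * G * phi) * dphi_dG + phi - phi ** 2
    return [dphi, dsens]

t = 60.0
sol = solve_ivp(rhs, [t, 0.0], [1.0, 0.0], rtol=1e-10, atol=1e-12)
phi_t0, sens_t0 = sol.y[:, -1]
# phi_{1,0}[t,0] feeds (11.E4); d phi_{1,0}[t,0]/dG feeds (11.E11)
print(phi_t0, sens_t0)
```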


Copyright information

© 2013 Springer Science+Business Media Dordrecht

Cite this chapter

Little, M.P., Heidenreich, W.F., Li, G. (2013). Parameter Identifiability and Redundancy, with Applications to a General Class of Stochastic Carcinogenesis Models. In: Prokop, A., Csukás, B. (eds) Systems Biology. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-6803-1_11
