
Statistical analysis of one-compartment pharmacokinetic models with drug adherence

  • Original Paper, Journal of Pharmacokinetics and Pharmacodynamics

Abstract

Pharmacokinetics is a branch of pharmacology that describes the time course of drug concentration within a living organism and supports scientific decision-making on potential drug candidates. However, the classical pharmacokinetic models with zero-order, first-order and saturated Michaelis–Menten elimination assume that patients follow their drug regimens perfectly during treatment, so the significant factor of patients’ drug adherence is not taken into account. In this study, therefore, considering the random change of dosage at a fixed dosing time interval, we reformulate the classical deterministic one-compartment pharmacokinetic models in a stochastic framework and analyze their qualitative properties, including the expectation and variance of the drug concentration, the existence of a limit drug distribution, and stochastic properties such as transience and recurrence. In addition, we carry out sensitivity analyses of the adherence-related parameters with respect to key quantities such as the expectation and variance, with particular attention to their impact on the lowest and highest steady-state drug concentrations (i.e. the therapeutic window). Our findings provide theoretical guidance on the variability of drug concentration and can help the optimal design of medication regimens. Moreover, the models developed in this paper can support future studies of the impact of drug adherence on long-term treatment of chronic diseases such as HIV, by integrating disease models with the stochastic PK models.


References

  1. Jimmy B, Jose J (2011) Patient medication adherence: measures in daily practice. Oman Med J 26(3):155–159

  2. Brown M, Bussell J (2011) Medication adherence: WHO cares? Mayo Clin Proc 86(4):304–314

  3. Lucca J, Ramesh M, Parthasarathi G, Ram D (2015) Incidence and factors associated with medication nonadherence in patients with mental illness: a cross-sectional study. J Postgrad Med 61(4):251–256

  4. Haynes R, McDonald H, Garg A (2002) Helping patients follow prescribed treatment: clinical applications. J Am Med Assoc 288(22):2880–2883

  5. Col N, Fanale J, Kronholm P (1990) The role of medication noncompliance and adverse drug reactions in hospitalizations of the elderly. Arch Intern Med 150(4):841–845

  6. Kenna L, Labbé L, Barrett J, Pfister M (2005) Modeling and simulation of adherence: approaches and applications in therapeutics. AAPS J 7(2):E390–E407

  7. Dyson L, Stolk W, Farrell S, Hollingsworth T (2017) Measuring and modelling the effects of systematic non-adherence to mass drug administration. Epidemics 18:56–66

  8. Slejko J, Sullivan P, Anderson H et al (2014) Dynamic medication adherence modeling in primary prevention of cardiovascular disease: a Markov microsimulation methods application. Value Health 17(6):725–731

  9. Wu X, Yang H, Yuan R et al (2020) Predictive models of medication non-adherence risks of patients with T2D based on multiple machine learning algorithms. BMJ Open Diabetes Res Care 8:e001055

  10. Yu Y, Luo D, Chen X et al (2018) Medication adherence to antiretroviral therapy among newly treated people living with HIV. BMC Public Health 18(1):825

  11. Du X, Chen H, Zhuang Y, Zhao Q, Shen B (2020) Medication adherence in Chinese patients with systemic lupus erythematosus. J Clin Rheumatol 26(3):94–98

  12. Gibaldi M, Perrier D (2007) Pharmacokinetics, 2nd edn. Informa Healthcare, New York

  13. Boroujerdi M (2015) Pharmacokinetics and toxicokinetics. CRC Press, Boca Raton

  14. Ramanathan M (1999) An application of Ito’s lemma in population pharmacokinetics and pharmacodynamics. Pharm Res 16(4):584–586

  15. Ferrante L, Bompadre S, Leone L (2003) A stochastic compartmental model with long lasting infusion. Biom J 45:182–194

  16. Lévy Véhel P, Lévy Véhel J (2013) Variability and singularity arising from poor compliance in a pharmacokinetic model I: the multi-IV case. J Pharmacokinet Pharmacodyn 40(1):15–39

  17. Fermín L, Lévy Véhel J (2017) Variability and singularity arising from poor compliance in a pharmacokinetic model II: the multi-oral case. J Math Biol 74(4):809–841

  18. Saqlain M, Alam M, Rönnegård L et al (2020) Investigating stochastic differential equations modelling for Levodopa infusion in patients with Parkinson’s disease. Eur J Drug Metab Pharmacokinet 45:41–49

  19. Tu W, Nyandiko W, Liu H et al (2017) Pharmacokinetics-based adherence measures for antiretroviral therapy in HIV-infected Kenyan children. J Int AIDS Soc 20(1):21157

  20. Donnet S, Samson A (2013) A review on estimation of stochastic differential equations for pharmacokinetic/pharmacodynamic models. Adv Drug Deliv Rev 65(7):929–939

  21. Robert J, Roy M et al (2020) Dynamics of individual adherence to mass drug administration in a conditional probability model. medRxiv, preprint

  22. Tang S, Xiao Y, Liang J, Wang X (2019) Mathematical biology. Science Press, Beijing

  23. Liu C, Liu D (1984) Introduction to pharmacokinetics. China Academic Press, Beijing

  24. Mao S, Cheng Y (2011) Probability theory and mathematical statistics. Higher Education Press, Beijing

  25. Zhang B, Shang H (2014) Applied stochastic processes, 3rd edn. China Renmin University Press, Beijing

  26. Tang S, Xiao Y (2007) One-compartment model with Michaelis–Menten elimination kinetics and therapeutic window: an analytical approach. J Pharmacokinet Pharmacodyn 34(6):807–827

  27. Marino S, Hogue I et al (2008) A methodology for performing global uncertainty and sensitivity analysis in systems biology. J Theor Biol 254(1):178–196

  28. Tang S, Liang J, Tan Y, Cheke RA (2011) Threshold conditions for integrated pest management models with pesticides that have residual effects. J Math Biol 66:1–35

  29. Nel A, Niekerk N, Baelen B et al (2021) Safety, adherence, and HIV-1 seroconversion among women using the dapivirine vaginal ring (DREAM): an open-label, extension study. Lancet HIV 8(2):e77–e86

  30. Wu X, Li J, Nekka F (2015) Closed form solutions and dominant elimination pathways of simultaneous first-order and Michaelis–Menten kinetics. J Pharmacokinet Pharmacodyn 42(2):151–161

  31. Wu X, Nekka F, Li J (2019) Analytical solution and exposure analysis of a pharmacokinetic model with simultaneous elimination pathways and endogenous production: the case of multiple dosing administration. Bull Math Biol 81(12):3436–3459

  32. Buraczewski D, Damek E, Mikosch T (2016) Stochastic models with power-law tails: the equation \(X=AX+B\). Springer, New York

  33. Fagerholm H, Högnäs G (2002) Stability classification of a Ricker model with two random parameters. Adv Appl Probab 34(1):112–127

  34. Meyn S, Tweedie R (1993) Markov chains and stochastic stability. Springer, London

  35. Foster F (1953) On the stochastic matrices associated with certain queueing processes. Ann Math Stat 24(3):355–360


Acknowledgements

This work is supported by the National Natural Science Foundation of China (12031010, 61772017) and the Fundamental Research Funds for the Central Universities (GK202007039, GK202007038). The authors thank the anonymous referees for their careful reading and many valuable comments on the manuscript.

Author information


Correspondence to Xiaotian Wu or Sanyi Tang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Fundamental knowledge of stochastic models

In this Appendix, we provide some fundamental knowledge of stochastic processes, including stability classifications and mean drift criteria, to better understand the outcomes [32,33,34,35].

(a) Stability classifications

Let \(\{X_n\}\) denote a Markov chain on \({{\mathbb {R}}}\). Then the \(n\)-step transition probability of the chain at \(x\in {\mathbb {R}}\) is

$$\begin{aligned} P^n(x, A) =P(X_n \in A| X_0=x) {\mathop {=}\limits ^{\bigtriangleup }}P_x(X_n \in A), \end{aligned}$$

where \(A \in {{\mathscr {B}}}({{\mathbb {R}}})\) is any set in the Borel \(\sigma \)-field of \({{\mathbb {R}}}\), and \(n \ge 1\).

A Markov chain \(\{X_n\}\) is said to be \(\nu \)-irreducible for some measure \(\nu \) on \({{\mathscr {B}}}({{\mathbb {R}}})\), if every set with positive \(\nu \)-measure can be reached from any starting point \(X_0\). Moreover, an irreducible Markov chain \(\{X_n\}\) is called recurrent if

$$\begin{aligned} \sum _{n=1}^\infty P^n(x,A)=\infty , \end{aligned}$$

for every \(x \in {{\mathbb {R}}}\) and every \(A \in {{\mathscr {B}}}({\mathbb R})\), otherwise the chain is said to be transient.

If a Markov chain \(\{X_n\}\) for every \(x \in {{\mathbb {R}}}\) and every \(A \in {{\mathscr {B}}}({{\mathbb {R}}})\) satisfies the somewhat stronger condition that

$$\begin{aligned} {P_x\left\{ \sum _{n=1}^\infty \varvec{I}_A(X_n)=\infty \right\} =1,} \end{aligned}$$

then the chain is said to be Harris recurrent, where \(\varvec{I}_A(x)\) is an indicator function, defined as

$$ I_{A} (x) = \left\{ {\begin{array}{*{20}c} {1,} & {x \in A,} \\ {0,} & {x \notin A.} \\ \end{array} } \right. $$
(A1)

The Markov chain \(\{X_n\}\) is a Feller chain if the function \({{\mathbb {E}}}[g(X_1)|X_0=x]\) is continuous for every bounded continuous function g on the state space \({{\mathbb {R}}}\), in the sense that the transition probability kernel maps bounded and continuous functions to continuous functions.

(b) Mean drift criteria

The mean drift criteria are the fundamental method for proving the stochastic stability of Markov chains, and the following Lemmas [34, 35] will be used in the current study; the function V appearing in these Lemmas is usually called a Lyapunov function.

Lemma 1

Suppose \(\{X_n\}\) is a \(\nu \)-irreducible Markov chain, then the chain \(\{X_n\}\) is transient if and only if there exists a bounded non-negative function V and a set \(C \in {{\mathscr {B}}}({\mathbb R})\) such that for all \(x \notin C\),

$$\begin{aligned} \bigtriangleup V(x)={{\mathbb {E}}}[V(X_1)|X_0=x]-V(x) \ge 0, \end{aligned}$$

and

$$\begin{aligned} D=\{V(x)> \sup _{y \in C} V(y)\} \in {{\mathscr {B}}}({{\mathbb {R}}}). \end{aligned}$$

Lemma 2

Assume that \(\{X_n\}\) is a \(\nu \)-irreducible Feller Markov chain and that the support of \(\nu \) is not empty. If there exists a function V(x) with the property that \(V(x) \rightarrow \infty \) as \(|x|\rightarrow \infty \), and a compact set C such that

$$\begin{aligned} \bigtriangleup V(x) \le 0, \end{aligned}$$

for all \(x \notin C\), then \(\{X_n\}\) is Harris recurrent.

Lemma 3

Assume that \(\{X_n\}\) is a \(\nu \)-irreducible Feller Markov chain and that the support of \(\nu \) is not empty. If there exists a function \(V: {{\mathbb {R}}}\rightarrow [0, \infty )\), a compact set \(C\in {\mathscr {B}}({{\mathbb {R}}}), b<\infty \), and \(\epsilon >0\), such that

$$\begin{aligned} {\bigtriangleup V(x) \le -\epsilon +b\varvec{I}_C(x),} \end{aligned}$$
(A2)

for all \(x \in {\mathbb {R}}\), where \(\varvec{I}_C(x)\) is an indicator function on set C and defined as Eq. (A1), then \(\{X_n\}\) is positive Harris recurrent.

In detail, Eq. (A2) represents two different criteria, one on and one off the set C, and can equivalently be expressed by requiring that

$$ \left\{ {\begin{array}{*{20}c} {\Delta V(x) \le - \epsilon ,} & {x \notin C,} \\ {\Delta V(x) \le b,} & {x \in C.} \\ \end{array} } \right. $$
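To make the drift criteria concrete, the following minimal Python sketch (our illustration, not part of the original paper) estimates \(\bigtriangleup V(x)={{\mathbb {E}}}[V(X_1)|X_0=x]-V(x)\) by Monte Carlo for a user-supplied transition rule. The AR(1)-type chain, the Lyapunov function \(V(x)=|x|+1\) and all parameter values below are hypothetical placeholders.

    import numpy as np

    def drift(step, V, x, n_samples=50_000, rng=None):
        # Monte Carlo estimate of Delta V(x) = E[V(X_1) | X_0 = x] - V(x).
        rng = np.random.default_rng(rng)
        x1 = np.array([step(x, rng) for _ in range(n_samples)])
        return V(x1).mean() - V(x)

    # Hypothetical chain X_{n+1} = f*X_n + F_{n+1} with F = Bernoulli(q)*D/V,
    # and Lyapunov function V(x) = |x| + 1 (cf. Lemma 3).
    f, q, D_over_V = 0.6, 0.8, 1.0
    step = lambda x, rng: f * x + D_over_V * rng.binomial(1, q)
    V = lambda x: np.abs(x) + 1.0

    for x in (0.0, 5.0, 50.0):
        print(f"x = {x:5.1f}, estimated drift = {drift(step, V, x):+.3f}")
    # The drift is negative for large |x|, as required off the set C in Lemma 3.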

Appendix 2: Calculation of \({\mathbb {E}}(X_n)\) and \(\mathrm{Var}(X_n)\) in Eq. (6)

To compute the expectation and variance of \(X_n\), we need to characterize the distribution of \(X_n\) over all \(2^{n-1}\) possible outcomes after \(n\) scheduled injections. In fact, if only the first dose is taken among the \(n\) injections, then

$$\begin{aligned} P\{X_n=C_0-(n-1)K\tau \}=\prod _{i=2}^n(1-a_i). \end{aligned}$$

If \(m+1\) doses are taken in total, denote the dosing indices other than the first by \(i_{m1}, i_{m2}, \ldots , i_{mm}\), where \(m=1, 2, \ldots ,n-1\); then

$$\begin{aligned} P\{X_n= & {} C_0-(n-1)K\tau +\sum _{j=1}^m\frac{D_{i_{mj}}}{V}\}\\= & {} \prod _{j=1}^ma_{i_{mj}}\prod _{i\ne i_{mj}}(1-a_i). \end{aligned}$$

Utilizing the definitions of the expectation and variance, we have

$$\begin{aligned} {\mathbb {E}}(X_n)=&(C_0-(n-1)K\tau )\prod _{i=2}^n(1-a_i) \\&+\sum _{i_{11}=2}^n \left( C_0-(n-1)K\tau +\frac{D_{i_{11}}}{V}\right) a_{i_{11}}\prod _{i\ne i_{11}}(1-a_i) \\&+\sum _{i_{21}}\sum _{i_{22}\ne i_{21}}\left( C_0-(n-1)K\tau +\frac{D_{i_{21}}}{V}+\frac{D_{i_{22}}}{V} \right) a_{i_{21}}a_{i_{22}}\\&\prod _{i\ne i_{2 \cdot }}(1-a_i) +\dots \\ =&C_0-(n-1)K\tau +\sum _{i=2}^na_i\frac{D_i}{V}, \end{aligned}$$

and

$$\begin{aligned} \mathrm{Var}(X_n)=&\mathrm{Var}(X_{n-1})+ \mathrm{Var}\left( A_n\frac{D_n}{V}\right) \\ =&\mathrm{Var}(X_{n-1}) + \frac{D_n^2}{V^2}a_n(1-a_n) \\ =&\left( \mathrm{Var}(X_{n-2})+\frac{D_{n-1}^2}{V^2}a_{n-1}(1-a_{n-1}) \right) \\&+ \frac{D_n^2}{V^2}a_n(1-a_n) \\ =&\cdots \\ =&\sum _{i=2}^n\frac{D_i^2}{V^2}a_i(1-a_i), \end{aligned}$$

where \(\mathrm{Var}(A_n)=a_n(1-a_n)\) is used.
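The closed forms above are easy to verify numerically. Below is a minimal Monte Carlo sketch (ours, not from the paper), assuming the recursion \(X_n=X_{n-1}-K\tau +A_n D_n/V\) with \(X_1=C_0\) and \(A_i\sim b(1,a_i)\) that underlies the derivation; the parameter values are illustrative only, and elimination is not truncated at zero here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Zero-order model with random adherence:
    # X_n = X_{n-1} - K*tau + A_n * D_n / V, X_1 = C_0, A_i ~ Bernoulli(a_i).
    C0, K, tau, Vd = 10.0, 0.5, 1.0, 1.0
    n = 10
    a = np.full(n + 1, 0.7)    # adherence probabilities a_2..a_n (indices 0,1 unused)
    D = np.full(n + 1, 2.0)    # dose sizes D_2..D_n

    reps = 200_000
    X = np.full(reps, C0)
    for i in range(2, n + 1):
        A = rng.binomial(1, a[i], size=reps)
        X = X - K * tau + A * D[i] / Vd

    mean_closed = C0 - (n - 1) * K * tau + sum(a[i] * D[i] / Vd for i in range(2, n + 1))
    var_closed = sum((D[i] / Vd) ** 2 * a[i] * (1 - a[i]) for i in range(2, n + 1))
    print(f"mean: simulated {X.mean():.3f} vs closed form {mean_closed:.3f}")
    print(f"var : simulated {X.var():.3f} vs closed form {var_closed:.3f}")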

Appendix 3: Proof of Theorem 1

Proof

Suppose, to the contrary, that \(X_n {\mathop {\rightarrow }\limits ^{\text {d}}}X\) for some random variable X. Due to the independence of \(F_n\) and \(X_n\), \(n=2,3, \ldots \), we have \((F_{n-1}, X_{n-1}){\mathop {\rightarrow }\limits ^{\text {d}}}(F, X)\). This implies that

$$\begin{aligned} X_{n-1}+F_{n-1}{\mathop {\rightarrow }\limits ^{\text {d}}}X+F. \end{aligned}$$

Consequently, X satisfies the stochastic equation

$$\begin{aligned} X {\mathop {=}\limits ^{\text {d}}} X+F, \end{aligned}$$
(C1)

which results in \(F=0\) with probability 1, contradicting the distribution of \(F_n\) given by Eq. (10); therefore X does not exist. \(\square \)

Appendix 4: Proof of Theorem 2

Proof

From any initial state the chain can reach only countably many states, so \(\nu \) can be chosen as the counting measure [34]. Since all of these states lie in a single communicating class, the Markov chain has the property of \(\nu \)-irreducibility.

Define a Lyapunov function as

$$\begin{aligned} V(x)=\mathrm{e}^{-x},\,\, x \in {{\mathbb {R}}^+}. \end{aligned}$$

Clearly \(V(x)>0\) and \(V(x) \le 1\) for all \(x \in {{\mathbb {R}}^+}\). Then, with the help of [24], we can calculate \(\bigtriangleup V(x)\) as follows:

$$\begin{aligned} \bigtriangleup V(x) = \mathrm{e}^{K\tau -x} (1-\mathrm{e}^{-\frac{D}{V}})\left( -q+\frac{1-\mathrm{e}^{-K\tau }}{1-\mathrm{e}^{-\frac{D}{V}}}\right) . \end{aligned}$$

Denote

$$\begin{aligned} \mu =\frac{1-\mathrm{e}^{-K\tau }}{1-\mathrm{e}^{-\frac{D}{V}}}, \end{aligned}$$

then \(\bigtriangleup V(x) \ge 0\) whenever \(q \le \mu \). We further choose \(C=[m, +\infty ) \subseteq {\mathbb {R}}^+\) with \(m>0\); the monotonicity of the function V(x) then ensures the following

$$\begin{aligned} D=\{V(x)> \sup _{y \in C} V(y)\}=(0, m) \in {{\mathscr {B}}}({\mathbb R^+}) \end{aligned}$$

to be valid.

In particular, if \(K\tau \ge \frac{D}{V}\), then \(\mu \ge 1\), so the chain is transient regardless of the value of \(q \in [0, 1]\); conversely, if \(K\tau < \frac{D}{V}\), then \(\mu <1\) and the chain is transient provided \(q\le \mu \). In summary, we conclude that the chain is transient when \(q \in [0, \min \{\mu , 1\}]\). The proof is then completed by Lemma 1. \(\square \)
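As a small numerical companion (ours, with illustrative parameter values), the sufficient condition for transience above reduces to computing \(\mu \) and comparing it with q:

    import numpy as np

    def transience_condition(K, tau, D, Vd, q):
        # Sufficient condition for transience in Theorem 2: the chain is
        # transient when q <= min(mu, 1), with
        # mu = (1 - exp(-K*tau)) / (1 - exp(-D/Vd)).
        mu = (1 - np.exp(-K * tau)) / (1 - np.exp(-D / Vd))
        return mu, q <= min(mu, 1.0)

    # Illustrative parameter values (not from the paper).
    mu, transient = transience_condition(K=0.5, tau=1.0, D=2.0, Vd=1.0, q=0.4)
    print(f"mu = {mu:.3f}, sufficient condition for transience met: {transient}")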

Appendix 5: Calculation of \({\mathbb {E}}(X_n)\) and \(\mathrm{Var}(X_n)\) in Eq. (14)

First, we consider the distribution of \(X_n\). For \(X_2\), we have

$$\begin{aligned} X_2=C_0\mathrm{e}^{-k_{el}\tau }+A_2\frac{D_2}{V} \end{aligned}$$

with \(A_2\sim b(1, a_2)\), and the distribution is calculated as

$$\begin{aligned} P\{X_2=C_0\mathrm{e}^{-k_{el}\tau }\}=1-a_2, \,\,P\left\{ X_2=C_0\mathrm{e}^{-k_{el}\tau }+\frac{D_2}{V}\right\} =a_2.\end{aligned}$$

The concentration \(X_3\) takes four possible values. If only the first dose is taken, then

$$\begin{aligned} P\{X_3=C_0\mathrm{e}^{-2k_{el}\tau }\}=(1-a_2)(1-a_3). \end{aligned}$$

If two doses are taken, then

$$\begin{aligned}&P\left\{ X_3=C_0\mathrm{e}^{-2k_{el}\tau }+\frac{D_2}{V}\mathrm{e}^{-k_{el}\tau }\right\} =a_2(1-a_3), \,\, \\&P\left\{ X_3=C_0\mathrm{e}^{-2k_{el}\tau }+\frac{D_3}{V}\right\} =(1-a_2)a_3. \end{aligned}$$

If three doses are taken, then

$$\begin{aligned} P\left\{ X_3=C_0\mathrm{e}^{-2k_{el}\tau }+\frac{D_2}{V}\mathrm{e}^{-k_{el}\tau }+\frac{D_3}{V}\right\} =a_2a_3. \end{aligned}$$

Similarly, the concentration \(X_n\) takes \(2^{n-1}\) possible values, and its distribution is obtained as follows. If only the first dose is taken among the \(n\) injections, then

$$\begin{aligned} P\{X_n=C_0\mathrm{e}^{-(n-1)k_{el}\tau }\}=\prod _{i=2}^n(1-a_i). \end{aligned}$$

If \(m+1\) doses are taken in total, denote the dosing indices other than the first by \(i_{m1}, i_{m2}, \ldots , i_{mm}\), where \(m=1, 2, \ldots ,n-1\); then

$$\begin{aligned}&P\left\{ X_n=C_0\mathrm{e}^{-(n-1)k_{el}\tau }+\sum _{j=1}^m\frac{D_{i_{mj}}}{V}\mathrm{e}^{-(n-{i_{mj}})k_{el}\tau }\right\} \\&\quad =\prod _{j=1}^ma_{i_{mj}}\prod _{i\ne i_{mj}}(1-a_i). \end{aligned}$$

Thus, by the definition of the expectation of a discrete random variable, we obtain

$$\begin{aligned}&{\mathbb {E}}(X_n)\\&\quad = C_0\mathrm{e}^{-(n-1)k_{el}\tau }\prod _{i=2}^n(1-a_i) \\&\qquad +\sum _{i_{11}=2}^n \left( C_0\mathrm{e}^{-(n-1)k_{el}\tau } +\frac{D_{i_{11}}}{V}\mathrm{e}^{-(n-i_{11})k_{el}\tau } \right) a_{i_{11}}\\&\quad \prod _{i\ne i_{11}}(1-a_i) \\&\qquad +\sum _{i_{21}}\sum _{i_{22}\ne i_{21}}\left( C_0\mathrm{e}^{-(n-1)k_{el}\tau } +\frac{D_{i_{21}}}{V}\mathrm{e}^{-(n-i_{21})k_{el}\tau }\right. \\&\qquad \left. +\frac{D_{i_{22}}}{V}\mathrm{e}^{-(n-i_{22})k_{el}\tau } \right) \\&\quad a_{i_{21}}a_{i_{22}}\prod _{i\ne i_{2 \cdot }}(1-a_i) + \cdots \\&\quad = C_0\mathrm{e}^{-(n-1)k_{el}\tau }+\sum _{i=2}^na_i\frac{D_i}{V}\mathrm{e}^{-(n-i)k_{el}\tau }. \end{aligned}$$

From Eq. (13), we can obtain the variance as

$$\begin{aligned} \mathrm{Var}(X_n)=&\mathrm{Var}(X_{n-1})\mathrm{e}^{-2k_{el}\tau }+ \mathrm{Var}(A_n\frac{D_n}{V}) \\ =&\mathrm{e}^{-2k_{el}\tau }\mathrm{Var}(X_{n-1})+ \frac{D_n^2}{V^2}a_n(1-a_n) \\ =&\mathrm{e}^{-2k_{el}\tau }\left( \mathrm{e}^{-2k_{el}\tau }\mathrm{Var}(X_{n-2})\right. \\&+\left. \frac{D_{n-1}^2}{V^2}a_{n-1}(1-a_{n-1}) \right) + \frac{D_n^2}{V^2}a_n(1-a_n) \\ =&\cdots \\ =&\sum _{i=2}^n\frac{D_i^2}{V^2}a_i(1-a_i)\mathrm{e}^{-2(n-i)k_{el}\tau }. \end{aligned}$$
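As with the zero-order case, these closed forms can be checked by simulation. A minimal sketch (ours), assuming the recursion \(X_n=X_{n-1}\mathrm{e}^{-k_{el}\tau }+A_n D_n/V\) of Eq. (13) and illustrative parameter values:

    import numpy as np

    rng = np.random.default_rng(1)

    # First-order model with random adherence:
    # X_n = X_{n-1} * exp(-k_el * tau) + A_n * D_n / V, X_1 = C_0 (cf. Eq. (13)).
    C0, kel, tau, Vd = 10.0, 0.3, 1.0, 1.0
    n = 12
    a = np.full(n + 1, 0.7)    # adherence probabilities a_2..a_n
    D = np.full(n + 1, 2.0)    # dose sizes D_2..D_n
    decay = np.exp(-kel * tau)

    reps = 200_000
    X = np.full(reps, C0)
    for i in range(2, n + 1):
        X = X * decay + rng.binomial(1, a[i], size=reps) * D[i] / Vd

    mean_closed = C0 * decay ** (n - 1) + sum(
        a[i] * D[i] / Vd * decay ** (n - i) for i in range(2, n + 1))
    var_closed = sum(
        (D[i] / Vd) ** 2 * a[i] * (1 - a[i]) * decay ** (2 * (n - i))
        for i in range(2, n + 1))
    print(f"mean: simulated {X.mean():.3f} vs closed form {mean_closed:.3f}")
    print(f"var : simulated {X.var():.3f} vs closed form {var_closed:.3f}")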

Appendix 6: Proof of Theorem 3

Proof

If \(X_n {\mathop {\rightarrow }\limits ^{\text {d}}}X\) for some random variable X, and \(F_n, X_n, n=2,3, \ldots \) are independent, then, similar to the argument of Theorem 1, we can conclude that X satisfies the following stochastic equation

$$\begin{aligned} X {\mathop {=}\limits ^{\text {d}}} fX+F. \end{aligned}$$
(F1)

Intuitively, the solution to Eq. (16) can be obtained by backward iteration. Accordingly, after n iterations, one obtains

$$\begin{aligned} X_n {\mathop {=}\limits ^{\text {d}}} \sum _{i=1}^{n-1} f^{i-1}F_i + f^{n-1}X_1, \end{aligned}$$

for \(n=2, 3, \ldots \). Set \(X_1^*=0\), then \(X_n^* {\mathop {=}\limits ^{\text {d}}} \sum _{i=1}^{n-1} f^{i-1}F_i.\) So we have

$$\begin{aligned} X_n {\mathop {=}\limits ^{\text {d}}} X_n^* + f^{n-1}X_1. \end{aligned}$$

Note that \(X_n^*\) is the partial sum of the infinite series

$$\begin{aligned} X^* {\mathop {=}\limits ^{\bigtriangleup }} \sum _{i=1}^\infty f^{i-1}F_i, \end{aligned}$$
(F2)

so a sufficient condition for \(X_n^*\) to converge in distribution is that the series (F2) converges with probability 1.

Considering the \(i\)th root of the general term of series (F2), we get

$$\begin{aligned} \lim _{i\rightarrow \infty } |f^{i-1}F_i|^{\frac{1}{i}}= \exp \left\{ \log f + \lim _{i\rightarrow \infty } \frac{1}{i}\log |F_i|\right\} =f<1, \end{aligned}$$

so the convergence of series (F2) follows from the Cauchy root test. Consequently, \(X_n^*\) converges in distribution.

For convenience, we rewrite \(X_n=X_n(X_1)\). By iterating Eq. (16) starting with different \(X_1', X_1''\), we obtain

$$\begin{aligned} X_n(X_1')-X_n(X_1'')= f^{n-1}(X_1'-X_1''). \end{aligned}$$
(F3)

If \({\mathbb {E}}(X_1)\) and \(\mathrm{Var}(X_1)\) exist, then \({\mathbb {E}}(f^{n-1}X_1)=f^{n-1}{\mathbb {E}}(X_1)\rightarrow 0\) and \(\mathrm{Var}(f^{n-1}X_1)=f^{2(n-1)} \mathrm{Var}(X_1)\rightarrow 0\) as \(n \rightarrow \infty \). By Chebyshev's inequality, we have

$$\begin{aligned} f^{n-1}X_1 {\mathop {\rightarrow }\limits ^{\text {d}}}0, \end{aligned}$$

so the limit distribution X exists. By Eq. (F3), we see that if \(X_n(X_1) {\mathop {\rightarrow }\limits ^{\text {d}}}X\) for one \(X_1\), then for all \(X_1\). In particular, \(X_n(X) {\mathop {=}\limits ^{\text {d}}} X \), and therefore X is unique in distribution.

Next, we show the existence and convergence of the higher-order moments of X determined by Eq. (F1). Setting \(\Vert X\Vert _p=({{\mathbb {E}}}|X|^p)^{1/p}\), we have

$$\begin{aligned} \sum \Vert f^{i-1}F_i\Vert _p= & {} \sum \Vert f^{i-1}\Vert _p\Vert F_i\Vert _p\\= & {} \sum f^{i-1}\Vert F_i\Vert _p < \infty , \end{aligned}$$

so series (F2) converges in \(p\)th mean; hence \({{\mathbb {E}}}(X_n^*)^p \rightarrow {{\mathbb {E}}}(X^*)^p\), and \({\mathbb E}X_n^p \rightarrow {{\mathbb {E}}}X^p\) in the case \(X_1=0\) with probability 1, since \(X_n(0) {\mathop {=}\limits ^{\text {d}}} X_n^*\). For general \(X_1\), it follows from Eq. (F3) that

$$\begin{aligned} {{\mathbb {E}}}|X_n(X_1)-X_n(0)|^p \le f^{n-1}{{\mathbb {E}}}|X_1|^p, \end{aligned}$$

which vanishes as \(n \rightarrow \infty \). Consequently, \({\mathbb E}(X_n(X_1))^p - {{\mathbb {E}}}(X_n(0))^p \rightarrow 0\) and \({\mathbb E}(X_n(X_1))^p \rightarrow {{\mathbb {E}}}X^p\) as \(n \rightarrow \infty \). From Eq. (F1), it is clear that \({{\mathbb {E}}}X^k\) can be formulated as (17). This completes the proof. \(\square \)
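The limit distribution \(X^*\) of Eq. (F2) can be approximated by truncating the series. The sketch below (ours) assumes, for illustration, the two-point form \(F_i=A_i D/V\) with \(A_i\sim b(1,q)\); the paper's Eq. (10) fixes the actual distribution of \(F_n\). The simulated first two moments are compared with the values obtained by summing the geometric series.

    import numpy as np

    rng = np.random.default_rng(2)

    # Approximate the limit X* = sum_{i>=1} f^{i-1} F_i by truncating the series.
    # Assumed (illustrative) form: F_i = A_i * D/V with A_i ~ Bernoulli(q).
    f, q, D_over_V = 0.7, 0.8, 2.0
    n_terms, reps = 100, 50_000    # f**100 is negligible, so truncation error is tiny

    F = rng.binomial(1, q, size=(reps, n_terms)) * D_over_V
    weights = f ** np.arange(n_terms)
    X_star = F @ weights

    # First two moments of the truncated limit distribution.
    m1 = q * D_over_V / (1 - f)                        # E[X*] via geometric series
    v1 = q * (1 - q) * D_over_V ** 2 / (1 - f ** 2)    # Var[X*] via geometric series
    print(f"E[X*]  : simulated {X_star.mean():.3f} vs analytical {m1:.3f}")
    print(f"Var[X*]: simulated {X_star.var():.4f} vs analytical {v1:.4f}")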

Appendix 7: Proof of Theorem 4

Proof

The proof is divided into two steps: we first prove Harris recurrence and then positivity.

According to Lemma 2, it suffices to check the following three conditions: Feller chain, \(\nu \)-irreducibility and the drift condition.

We first verify the Feller property. Since

$$\begin{aligned} {{\mathbb {E}}}[g(X_2)|X_1=x]={{\mathbb {E}}}_x[g(X_2)]={\mathbb E}[g(fx+F_2)],\,\, x \in {{\mathbb {R}}}, \end{aligned}$$

the Feller property is immediately obtained by an application of dominated convergence.

The \(\nu \)-irreducibility can be obtained in the same way as for the zero-order process, so it remains to verify the drift condition.

Define a Lyapunov function as

$$\begin{aligned} V(x)=|x|+1,\,\, x \in {{\mathbb {R}}}. \end{aligned}$$
(G1)

Clearly \(V(x)>0\), \(V(x) \rightarrow \infty \) as \( |x| \rightarrow \infty \), and

$$\begin{aligned} \bigtriangleup V(x)&= {{\mathbb {E}}}(|fx+F_2|+1)-V(x) \\&\le f|x|+{{\mathbb {E}}}|F_2|+1-V(x) \\&= (f-1)V(x)+(1-f)\left( q\frac{D}{V}\frac{1}{1-f}+1\right) . \end{aligned}$$

Choose \(C=[-m, m] \subseteq {\mathbb {R}}\) and \(m>0\) such that \(\nu (C)>0\) and \(q\frac{D}{V}\frac{1}{1-f}<m\). Accordingly, when \(x \notin C\) we have

$$\begin{aligned} \bigtriangleup V(x) < (f-1)V(x)+(1-f)(|x|+1)=0, \end{aligned}$$

implying that the drift condition is satisfied. So the Harris recurrence is proved.

We then show the positive Harris recurrence, still using the function defined in Eq. (G1). The bound above gives \(\bigtriangleup V(x) \le (1-f)\left( q\frac{D}{V}\frac{1}{1-f}-|x|\right) \), so taking \(\epsilon =(1-f)\left( m-q\frac{D}{V}\frac{1}{1-f}\right) >0\) yields \(\bigtriangleup V(x) \le -\epsilon \) when \(x \notin C\).

When \(x \in C\), i.e., \(|x|<m\), we have

$$\begin{aligned} \bigtriangleup V(x)&\le f|x|+{{\mathbb {E}}}[|F_2|]+1-V(x) \\&= (f-1)|x|+q\frac{D}{V} \le q\frac{D}{V} {\mathop {=}\limits ^{\bigtriangleup }}b, \end{aligned}$$

which means that \(\bigtriangleup V(x) \le b\) when \(x \in C\). So this completes the argument. \(\square \)
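A quick Monte Carlo check of the two drift bounds (ours, with hypothetical parameter values), for the assumed recursion \(X_{n+1}=fX_n+F_{n+1}\) with \(F=AD/V\), \(A\sim b(1,q)\):

    import numpy as np

    rng = np.random.default_rng(3)

    # Drift of V(x) = |x| + 1 for the chain X_{n+1} = f*X_n + F_{n+1},
    # F = A * D/V with A ~ Bernoulli(q), as in the proof of Theorem 4.
    f, q, D_over_V = 0.7, 0.8, 2.0
    V = lambda x: np.abs(x) + 1.0

    m = q * D_over_V / (1 - f) + 1.0    # any m > q*(D/V)/(1-f); C = [-m, m]

    def drift(x, reps=200_000):
        F = rng.binomial(1, q, size=reps) * D_over_V
        return V(f * x + F).mean() - V(x)

    for x in (0.0, m, 2 * m, 10 * m):
        inside = "in C " if abs(x) <= m else "not C"
        print(f"x = {x:7.2f} ({inside}): drift = {drift(x):+.3f}")
    # Outside C the drift is negative (<= -eps); inside C it stays bounded by b.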


About this article


Cite this article

Yan, D., Wu, X. & Tang, S. Statistical analysis of one-compartment pharmacokinetic models with drug adherence. J Pharmacokinet Pharmacodyn 49, 209–225 (2022). https://doi.org/10.1007/s10928-021-09794-5
