
Regression Analysis of Stochastic Fatigue Crack Growth Model in a Martingale Difference Framework


Abstract

In the present paper, we are mainly concerned with the degradation mechanism that arises in fatigue crack growth (FCG). The crack evolution is modeled by a first-order stochastic differential system, composed of a deterministic FCG equation perturbed by a stochastic process. The main purpose is to investigate estimators of the model parameters and to establish some of their asymptotic properties by transforming the initial equation into a regression model. To this end, least squares estimation (LSE) is applied in a framework where the errors form a martingale difference sequence. The conditional least squares estimators of the parameters are proved to be consistent. Our results are obtained in the general case and then specified for the linear case, with focus on a particular application model derived from the most widely used FCG law in fracture mechanics, namely the Paris model. We derive the asymptotic normality of the LSE in the nonlinear case, with a discussion of the linear case. Finally, we provide a numerical example that illustrates the performance of the proposed methodology on a particular version of the stochastic model.
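
As a reading aid only, the following minimal sketch (not the authors' code; the discretization, the constants, and the noise form are illustrative assumptions) shows conditional least squares applied to a Paris-type crack growth recursion \(a_{k+1}=a_k+C\,a_k^{m/2}\,\Delta t+\eta _k\), where \(\{\eta _k\}\) is a martingale difference sequence:

```python
# Hypothetical discretized Paris-type model with martingale-difference noise;
# the constants C0, m0 and the noise scale are illustrative, not from the paper.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
C0, m0, dt, n = 1e-3, 2.2, 1.0, 2000

a = np.empty(n + 1)
a[0] = 1.0
for k in range(n):
    # eta_k is conditionally centered given the past: E[eta_k | F_{k-1}] = 0
    eta = 1e-4 * a[k] * rng.standard_normal()
    a[k + 1] = a[k] + C0 * a[k] ** (m0 / 2) * dt + eta

def residuals(theta):
    C, m = theta
    pred = a[:-1] + C * a[:-1] ** (m / 2) * dt   # f_k(theta), given F_{k-1}
    return a[1:] - pred                          # Y_k - f_k(theta)

fit = least_squares(residuals, x0=[5e-4, 2.0])   # conditional LSE
print(fit.x)                                     # should be close to (C0, m0)
```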


References

1. Anderson TW, Taylor JB (1979) Strong consistency of least squares estimates in dynamic models. Ann. Stat. 7(3):484–489

2. Banerjee P, Karpenko O, Udpa L, Haq M, Deng Y (2018) Prediction of impact-damage growth in GFRP plates using particle filtering algorithm. Compos. Struct. 194:527–536

3. Bellec PC (2018) Sharp oracle inequalities for least squares estimators in shape restricted regression. Ann. Stat. 46(2):745–780

4. Abdessalem AB, Azaïs R, Touzet-Cortina M, Gégout-Petit A, Puiggali M (2016) Stochastic modelling and prediction of fatigue crack propagation using piecewise-deterministic Markov processes. Proc. Inst. Mech. Eng. Part O: J. Risk Reliab. 230(4):405–416

5. Bickel PJ, Klaassen CAJ, Ritov Y, Wellner JA (1998) Efficient and adaptive estimation for semiparametric models. Springer, New York (reprint of the 1993 original)

6. Bindele HF (2015) The signed-rank estimator for nonlinear regression with responses missing at random. Electron. J. Stat. 9(1):1424–1448

7. Chen F, Zou B, Chen N (2018) The consistency of least-square regularized regression with negative association sequence. Int. J. Wavelets Multiresolut. Inf. Process. 16(3):1850019

8. Chen X (2012) Asymptotic properties for estimates of nonparametric regression model with martingale difference errors. Statistics 46(5):687–696

9. Chen Z, Wang H, Wang X (2016) The consistency for the estimator of nonparametric regression model based on martingale difference errors. Stat. Pap. 57(2):451–469

10. Cheng R (2017) Non-standard parametric statistical inference. Oxford University Press, Oxford

11. Chiquet J, Limnios N (2008) A method to compute the transition function of a piecewise deterministic Markov process with application to reliability. Stat. Probab. Lett. 78(12):1397–1403

12. Chiquet J, Limnios N (2013) Dynamical systems with semi-Markovian perturbations and their use in structural reliability. In: Stochastic reliability and maintenance modeling. Springer Series in Reliability Engineering, vol 9. Springer, London, pp 191–218

13. Chiquet J, Limnios N, Eid M (2009) Piecewise deterministic Markov processes applied to fatigue crack growth modelling. J. Stat. Plan. Inference 139(5):1657–1667

14. Christopeit N, Helmes K (1980) Strong consistency of least squares estimators in linear regression models. Ann. Stat. 8(4):778–788

15. Cinlar E (1969) Markov renewal theory. Adv. Appl. Probab. 1(2):123–187

16. Collipriest JE (1972) An experimentalist's view of the surface flaw problem. In: Swedlow EJL (ed) Physical problems and computational solutions. American Society of Mechanical Engineers, New York, pp 43–62

17. Davis MHA (1984) Piecewise-deterministic Markov processes: a general class of nondiffusion stochastic models. J. Roy. Stat. Soc. Ser. B 46(3):353–388

18. Davis MHA (1993) Markov models and optimization. Monographs on Statistics and Applied Probability, vol 49. Chapman & Hall, London

19. Delgado MA (1992) Semiparametric generalized least squares in the multivariate nonlinear regression model. Econ. Theory 8(2):203–222

20. Donaldson JR, Schnabel RB (1987) Computational experience with confidence regions and confidence intervals for nonlinear least squares. Technometrics 29(1):67–82

21. Draper NR, Smith H (1998) Applied regression analysis, 3rd edn. Wiley, Hoboken

22. Eicker F (1963) Über die Konsistenz von Parameterschätzfunktionen für ein gemischtes Zeitreihen-Regressionsmodell. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1(5):456–477

23. Godambe VP, Heyde CC (1987) Quasi-likelihood and optimal estimation. Int. Stat. Rev. 55:231–244

24. Grenander U, Rosenblatt M (1957) Statistical analysis of stationary time series. Wiley, New York

25. Grigoriev Y, Ivanov AV (1993) Asymptotic expansions for quadratic functionals of the least squares estimator of a nonlinear regression parameter. Math. Methods Stat. 2(4):269–294

26. Hall P, Heyde CC (1980) Martingale limit theory and its application. Probability and Mathematical Statistics. Academic Press, New York

27. Heyde CC (1997) Quasi-likelihood and its application: a general approach to optimal parameter estimation. Springer, New York

28. Howard R (1971) Dynamic probabilistic systems, vol I: Markov models. Series in Decision and Control. Wiley, Hoboken

29. Huber PJ (1973) Robust regression: asymptotics, conjectures and Monte Carlo. Ann. Stat. 1:799–821

30. Ibragimov R, Phillips PCB (2008) Regression asymptotics using martingale convergence methods. Econ. Theory 24(4):888–947

31. Jacob C (2010) Conditional least squares estimation in nonstationary nonlinear stochastic regression models. Ann. Stat. 38(1):566–597

32. Jacobsen M (2006) Point process theory and applications: marked point and piecewise deterministic processes. Probability and its Applications. Birkhäuser, Boston

33. Jennrich RI (1969) Asymptotic properties of non-linear least squares estimators. Ann. Math. Stat. 40(2):633–643

34. Koroliuk VS, Limnios N (2005) Stochastic systems in merging phase space. World Scientific, London

35. Lalam N, Jacob C (2004) Estimation of the offspring mean in a supercritical or near-critical size-dependent branching process. Adv. Appl. Probab. 36:582–601

36. Lai TL (1994) Asymptotic properties of nonlinear least squares estimates in stochastic regression models. Ann. Stat. 22(4):1917–1930

37. Lai TL, Robbins H (1981) Consistency and asymptotic efficiency of slope estimates in stochastic approximation schemes. Probab. Theory Relat. Fields 56(3):329–360

38. Lai TL, Wei CZ (1982) Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems. Ann. Stat. 10(1):154–166

39. Lai TL, Robbins H, Wei CZ (1978) Strong consistency of least squares estimates in multiple regression. Proc. Natl. Acad. Sci. USA 75(7):3034–3036

40. Lai TL, Robbins H, Wei CZ (1979) Strong consistency of least squares estimates in multiple regression II. J. Multivar. Anal. 9(3):343–361

41. Lehmann EL, Casella G (1998) Theory of point estimation, 2nd edn. Springer Texts in Statistics. Springer, New York

42. Lehmann EL, Romano JP (2005) Testing statistical hypotheses, 3rd edn. Springer Texts in Statistics. Springer, New York

43. Li D, Tjøstheim D, Gao J (2016) Estimation in nonlinear regression with Harris recurrent Markov chains. Ann. Stat. 44(5):1957–1987

44. Limnios N, Oprişan G (2001) Semi-Markov processes and reliability. Statistics for Industry and Technology. Birkhäuser, Boston

45. Lin YK, Yang JN (1985) A stochastic theory of fatigue crack propagation. AIAA J. 23(1):117–124

46. Lindsey JK (1996) Parametric statistical inference. Oxford Science Publications. Oxford University Press, New York

47. Liu X, Ouyang A, Yun Z (2018) Fuzzy weighted least squares support vector regression with data reduction for nonlinear system modeling. Math. Probl. Eng., Art. ID 7387650, 13 pp

48. Malinvaud E (1970) The consistency of nonlinear regressions. Ann. Math. Stat. 41(3):956–969

49. Malinvaud E (1980) Statistical methods of econometrics, 3rd edn. Studies in Mathematical and Managerial Economics, vol 6. North-Holland, Amsterdam (translated from the French by Anne Silvey)

50. Miller S, Startz R (2019) Feasible generalized least squares using support vector regression. Econ. Lett. 175:28–31

51. Nelson PI (1980) A note on strong consistency of least squares estimators in regression models with martingale difference errors. Ann. Stat. 8(5):1057–1064

52. Øksendal B (2003) Stochastic differential equations: an introduction with applications. Universitext. Springer, Berlin

53. Papamichail CA, Bouzebda S, Limnios N (2016) Reliability calculus on crack propagation problem with a Markov renewal process. Springer, New York, pp 343–378

54. Paris PC, Erdogan F (1963) A critical analysis of crack propagation laws. J. Fluids Eng. 85:528–533

55. Peng Z, Ying QG, Liao B, Ren XF (2018) Study of fatigue crack propagation behaviour for dual-phase X80 pipeline steel. Ironmak. Steelmak. 45(7):635–640

56. Pfanzagl J (1994) Parametric statistical theory. De Gruyter Textbook. Walter de Gruyter, Berlin (with the assistance of R. Hamböker)

57. Prakasa Rao BLS (1984) The rate of convergence of the least squares estimator in a nonlinear regression model with dependent errors. J. Multivar. Anal. 14(3):315–322

58. Prakasa Rao BLS (1986) Weak convergence of least squares process in the smooth case. Statistics 17(4):505–516

59. Pronzato L (2009) Asymptotic properties of nonlinear estimates in stochastic models with finite design space. Stat. Probab. Lett. 79(21):2307–2313

60. Pyke R (1961) Markov renewal processes: definitions and preliminary properties. Ann. Math. Stat. 32(4):1231–1242

61. Shklyar S (2018) Consistency of the total least squares estimator in the linear errors-in-variables regression. Mod. Stoch. Theory Appl. 5(3):247–295

62. Shurenkov VM (1984) On the theory of Markov renewal. Theory Probab. Appl. 29(2):247–265

63. Skouras K (2000) Strong consistency in nonlinear stochastic regression models. Ann. Stat. 28(3):871–879

64. Sobczyk K (ed) (1993) Stochastic approach to fatigue: experiments, modelling and reliability estimation. CISM International Centre for Mechanical Sciences, vol 334. Springer, Vienna

65. Sobczyk K, Spencer B (2012) Random fatigue: from data to theory. Elsevier, Amsterdam

66. Spencer BF Jr (1993) Stochastic diffusion models for fatigue crack growth and reliability estimation. In: Sobczyk K (ed) Stochastic approach to fatigue. CISM International Centre for Mechanical Sciences, vol 334. Springer, Vienna, pp 185–241

67. Tong H, Ng M (2018) Analysis of regularized least squares for functional linear regression model. J. Complex. 49:85–94

68. van der Vaart AW (1998) Asymptotic statistics. Cambridge Series in Statistical and Probabilistic Mathematics, vol 3. Cambridge University Press, Cambridge

69. Virkler DA, Hillberry BM, Goel PK (1978) The statistical nature of fatigue crack propagation. Technical report, School of Mechanical Engineering, Purdue University, West Lafayette, Indiana

70. Šidák Z (1967) Rectangular confidence regions for the means of multivariate normal distributions. J. Am. Stat. Assoc. 62(318):626–633

71. Wang J (1996) Asymptotics of least-squares estimators for constrained nonlinear regression. Ann. Stat. 24(3):1316–1326

72. Wang X, Deng X, Hu S (2018) On consistency of the weighted least squares estimators in a semiparametric regression model. Metrika 81(7):797–820

73. Wedderburn RWM (1974) Quasi-likelihood functions, generalized linear models, and the Gauss–Newton method. Biometrika 61(3):439–447

74. Wu C-F (1981) Asymptotic theory of nonlinear least squares estimation. Ann. Stat. 9(3):501–513

75. Zhang S, Miao Y, Xu X, Gao Q (2018) Limit behaviors of the estimator of nonparametric regression model based on martingale difference errors. J. Korean Stat. Soc. 47(4):537–547

76. Zhou X-C, Lin J-G (2012) A wavelet estimator in a nonparametric regression model with repeated measurements under martingale difference error's structure. Stat. Probab. Lett. 82(11):1914–1922


Acknowledgements

The authors are indebted to the Editor-in-Chief, Associate Editor and the referee for their very valuable comments and suggestions which led to a considerable improvement of the presentation of the manuscript.

Author information

Correspondence to Salim Bouzebda.


A Proofs of Sections 3 and 5

This section is devoted to the detailed proofs of our results. The notation introduced previously continues to be used in the sequel.

1.1 Proof of Proposition 3.5

Similarly to Theorem 1 of [36], it suffices to show that

$$\begin{aligned} \lim _{n \rightarrow \infty }\inf _{\Vert \varvec{\theta }-\varvec{\theta }_{0}\Vert \ge \delta }\big (S_{n}(\varvec{\theta })-S_{n}(\varvec{\theta }_{0})\big )=\infty ~~a.s., \end{aligned}$$

where

$$\begin{aligned} S_{n}(\varvec{\theta })-S_{n}(\varvec{\theta }_{0}) &= \sum _{k=1}^{n}\left( Y_{k}-f_{k}(\varvec{ \theta })\right) ^{2} -\sum _{k=1}^{n}\left( Y_{k}-f_{k}(\varvec{ \theta }_{0})\right) ^{2}\\ &= \sum _{k=1}^{n}d_{k}^{2}(\varvec{\theta }) -2\sum _{k=1}^{n}\eta _{k} d_{k}(\varvec{\theta })\\ & = D_n(\varvec{\theta }, \varvec{\theta _0} )-2L_{n}(\varvec{\theta }) \\ &= D_n(\varvec{\theta }, \varvec{\theta _0} )\left\{ 1-2\frac{L_{n}(\varvec{\theta })}{D_n(\varvec{\theta }, \varvec{\theta _0})}\right\} , \end{aligned}$$

with

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle Y_{k}-f_{k}(\varvec{ \theta })=Y_{k}-f_{k}(\varvec{ \theta }_{0})+f_{k}(\varvec{ \theta }_{0})-f_{k}(\varvec{ \theta })=\eta _{k}-d_{k}(\varvec{\theta }),\\ \displaystyle D_n(\varvec{\theta }, \varvec{\theta _0} )=\sum _{k=1}^n \Big (f_k(\varvec{\theta })-f_k(\varvec{\theta _0})\Big )^2 \text{ and }\\ \displaystyle L_{n}(\varvec{\theta })=\sum _{k=1}^{n} \eta _{k} d_{k}(\varvec{\theta }). \end{array}\right. } \end{aligned}$$

This implies that

$$\begin{aligned} \inf _{\Vert \varvec{\theta }-\varvec{\theta }_{0}\Vert \ge \delta }\big (S_{n}(\varvec{\theta })-S_{n}(\varvec{\theta }_{0})\big )\ge \inf _{\Vert \varvec{\theta }-\varvec{\theta }_{0}\Vert \ge \delta }D_n(\varvec{\theta }, \varvec{\theta _0} )\left\{ 1-2\sup _{\Vert \varvec{\theta }-\varvec{\theta }_{0}\Vert \ge \delta }\frac{|L_{n}(\varvec{\theta })|}{|D_n(\varvec{\theta }, \varvec{\theta _0} )|}\right\} . \end{aligned}$$

In order to apply Theorem 1 of [74] or Proposition 3.1 of [31], we first need that, for all \(\varvec{\theta } \ne \varvec{\theta _0}\),

$$\begin{aligned} D_n(\varvec{\theta }, \varvec{\theta _0} )=\sum _{k=1}^n(f_k(\varvec{\theta })-f_k(\varvec{\theta _0}))^2 \rightarrow \infty , \end{aligned}$$

which follows from Assumption A.3. It then remains to show that

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{\Vert \varvec{\theta }-\varvec{\theta }_{0}\Vert \ge \delta }\frac{|L_{n}(\varvec{\theta })|}{|D_n(\varvec{\theta }, \varvec{\theta _0} )|}=0, ~~a.s. \end{aligned}$$
(A.1)

This follows from the strong law of large numbers for submartingales (Proposition 5.1 of [31]), applied to \(d_{k}(\varvec{\theta })\) for \(\Vert \varvec{\theta }-\varvec{\theta }_{0}\Vert \ge \delta\), using Assumptions A.1–A.3. Thus the proof is complete. \(\square\)
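
As an illustration of this consistency result, here is a short self-contained simulation (a sketch under assumptions chosen for illustration, not the paper's numerical example): a scalar linear model \(Y_k=\theta _0 x_k+\eta _k\) whose errors \(\eta _k\) form a martingale difference sequence with \({\mathscr {F}}_{k-1}\)-measurable volatility:

```python
# eta_k = sigma(F_{k-1}) * z_k with z_k iid N(0,1) independent of the past,
# so E[eta_k | F_{k-1}] = 0; and D_n(theta, theta0) = (theta-theta0)^2 * sum x_k^2
# diverges, matching the divergence condition used in the proof above.
import numpy as np

rng = np.random.default_rng(1)
theta0, n = 2.5, 100_000
x = rng.uniform(1.0, 2.0, size=n)

eta = np.empty(n)
y_prev = 0.0
for k in range(n):
    scale = 0.5 * (1.0 + np.tanh(y_prev) ** 2)   # F_{k-1}-measurable volatility
    eta[k] = scale * rng.standard_normal()
    y_prev = theta0 * x[k] + eta[k]
y = theta0 * x + eta

for m in (100, 1_000, 10_000, 100_000):
    theta_hat = (x[:m] @ y[:m]) / (x[:m] @ x[:m])  # closed-form LSE
    print(m, abs(theta_hat - theta0))              # error shrinks as m grows
```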

1.2 Proof of Theorem 5.2

Following [74] and the proof of Proposition 6.1 of [31], we infer from the first-order Taylor expansion of \(\mathbf { S}_{n}^{\prime }(\widehat{\varvec{\theta }}_{n})\) around \(\varvec{\theta }_{0}\) that

$$\begin{aligned} {\mathbf {S}}_{n}^{\prime }(\widehat{\varvec{\theta }}_{n})=\mathbf { S}_{n}^{\prime }(\varvec{\theta }_{0})+\mathbf { S}_{n}^{\prime \prime }(\widetilde{\varvec{\theta }}_{n})\left( \widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}\right) . \end{aligned}$$
(A.2)

Since \(\widehat{\varvec{\theta }}_{n}\) is eventually an interior point of \(\varvec{\Theta }\), we have \(\mathbf { S}_{n}^{\prime }(\widehat{\varvec{\theta }}_{n})=0\). With \({\mathbf {S}}_{n}^{\prime \prime }(\cdot )\) invertible in a neighborhood of \(\varvec{\theta }_{0}\) (Assumption B.1.(iii)), Eq. (A.2) implies

$$\begin{aligned} \left( \widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}\right) =-\left[ \mathbf { S}_{n}^{\prime \prime }(\widetilde{\varvec{\theta }}_{n})\right] ^{-1}\mathbf { S}_{n}^{\prime }(\varvec{\theta }_{0}), \end{aligned}$$

and also

$$\begin{aligned} \left( \widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}\right) =-\mathbf { M}_{n}\left[ \mathbf { S}_{n}^{\prime \prime }(\widetilde{\varvec{\theta }}_{n})\mathbf { M}_{n}\right] ^{-1}{\mathbf {S}}_{n}^{\prime }(\varvec{\theta }_{0}). \end{aligned}$$

Hence, for any \(p\times p\) matrix \(\Psi _{n}\) such that (A.1) in Jacob [31] is satisfied, we have

$$\begin{aligned} \Psi _{n}\left( \widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}\right) =-\Psi _{n}\left[ \mathbf { S}_{n}^{\prime \prime }(\widetilde{\varvec{\theta }}_{n})\right] ^{-1}\mathbf { S}_{n}^{\prime }(\varvec{\theta }_{0}), \end{aligned}$$

and also

$$\begin{aligned} \Psi _{n}\left( \widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}\right) =-\Psi _{n}\mathbf { M}_{n}\left[ \mathbf { S}_{n}^{\prime \prime }(\widetilde{\varvec{\theta }}_{n})\mathbf { M}_{n}\right] ^{-1}{\mathbf {S}}_{n}^{\prime }(\varvec{\theta }_{0}). \end{aligned}$$
(A.3)

From

$$\begin{aligned} \mathbf {S}_{n}(\varvec{\theta })=\sum _{k=1}^{n}\left( f_k(\varvec{ \theta }_{0})+ \eta _k-f_{k}(\varvec{ \theta })\right) ^{2}, \end{aligned}$$

we obtain its first derivative

$$\begin{aligned} \mathbf { S}_{n}^{\prime }(\varvec{\theta })=-2\sum _{k=1}^{n}(f_{k}(\varvec{ \theta }_{0})+\eta _{k}-f_{k}(\varvec{ \theta }))\mathbf { f}^{\prime }_{k}(\varvec{\theta }), \end{aligned}$$
(A.4)

and then, the second derivative

$$\begin{aligned} {\mathbf {S}}_{n}^{\prime \prime }(\varvec{\theta })= 2\sum _{k=1}^{n}{\mathbf {f}}^{\prime }_{k}(\varvec{\theta })\mathbf { f}^{\prime }_{k}(\varvec{\theta })^{\top }-2\sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta })-2\sum _{k=1}^{n}(f_{k}(\varvec{ \theta }_{0})-f_{k}(\varvec{ \theta }))\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }). \end{aligned}$$
(A.5)

So, from (A.4), \(\mathbf { S}_{n}^{\prime }({\varvec{\theta }})\) at \(\varvec{\theta }_{0}\) takes the value

$$\begin{aligned} \mathbf { S}_{n}^{\prime }(\varvec{\theta }_{0})=-2\sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime }_{k}(\varvec{\theta }_{0}). \end{aligned}$$
(A.6)

Notice the following limits:

1.

    \(\displaystyle \left[ \sum _{k=1}^{n}\mathbf { f}^{\prime }_{k}(\widetilde{\varvec{\theta }}_{n})\mathbf { f}^{\prime }_{k}(\widetilde{\varvec{\theta }}_{n})^{\top }\right] {\mathbf {M}}_{n} {\mathop \rightarrow \limits ^{{\mathbb {P}}}} {\mathbf {I}} ~~\text{ according } \text{ to } \text{ assumption } \text{ B.1.(iii) },\)

2.

    \(\displaystyle \left[ \sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime \prime }_{k}(\widetilde{\varvec{\theta }}_{n})\right] {\mathbf {M}}_{n} {\mathop \rightarrow \limits ^{{\mathbb {P}}}} 0 , ~~\text{ according } \text{ to } \text{ assumption } \text{ B.1.(i) },\)

3.

    \(\displaystyle \left[ \sum _{k=1}^{n}\Big (f_{k}(\varvec{ \theta }_{0})-f_{k}(\widetilde{\varvec{\theta }}_{n})\Big )\mathbf { f}^{\prime \prime }_{k}(\widetilde{\varvec{\theta }}_{n})\right] {\mathbf {M}}_{n} {\mathop \rightarrow \limits ^{{\mathbb {P}}}} 0 ~~\text{ according } \text{ to } \text{ assumption } \text{ B.1.(ii) }.\)

Let us define the matrix \({\varvec{K}}_n\) by

$$\begin{aligned} {\varvec{K}}_n=\frac{1}{2}\mathbf { S}_{n}^{\prime \prime }(\widetilde{\varvec{\theta }}_{n})\mathbf { M}_{n}&=\left[ \sum _{k=1}^{n}\mathbf { f}^{\prime }_{k}(\widetilde{\varvec{\theta }}_{n})\mathbf { f}^{\prime }_{k}(\widetilde{\varvec{\theta }}_{n})^{\top }\right] {\mathbf {M}}_{n}-\left[ \sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime \prime }_{k}(\widetilde{\varvec{\theta }}_{n})\right] {\mathbf {M}}_{n}\\&\quad -\left[ \sum _{k=1}^{n}(f_{k}(\varvec{ \theta }_{0})-f_{k}(\widetilde{\varvec{\theta }}_{n}))\mathbf { f}^{\prime \prime }_{k}(\widetilde{\varvec{\theta }}_{n})\right] {\mathbf {M}}_{n}. \end{aligned}$$

We infer that, as \(n\rightarrow \infty\),

$$\begin{aligned} {\varvec{K}}_n{\mathop {\rightarrow }\limits ^{{\mathbb {P}}}}{\mathbf {I}}. \end{aligned}$$

An application of Slutsky's theorem to Eq. (A.3), since, as \(n\rightarrow \infty\),

$$\begin{aligned} \left[ \mathbf {S}_{n}^{\prime \prime }(\widetilde{\varvec{\theta }}_{n})\mathbf { M}_{n}\right] ^{-1} {\mathop \rightarrow \limits ^{{\mathbb {P}}}} \frac{1}{2}\,{\mathbf {I}}, \end{aligned}$$

implies that

$$\begin{aligned} \lim _{n\rightarrow \infty }\Psi _{n}\left( \widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}\right) {\mathop = \limits ^{d}}\lim _{n\rightarrow \infty }\left\{ -\Psi _{n}{\mathbf {M}}_{n}\left[ \frac{1}{2}\mathbf { I}^{-1} \right] \mathbf { S}_{n}^{\prime }(\varvec{\theta }_{0})\right\} . \end{aligned}$$
(A.7)

Then we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\Psi _{n}\left( \widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}\right) {\mathop {=}\limits ^{d}}-\lim _{n\rightarrow \infty }\frac{1}{2}\Psi _{n}{\mathbf {M}}_{n}\mathbf { S}_{n}^{\prime }(\varvec{\theta }_{0}), \end{aligned}$$
(A.8)

and

$$\begin{aligned} \lim _{n\rightarrow \infty }\Psi _{n}\left( \widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}\right) {\mathop =\limits ^{d}}\lim _{n\rightarrow \infty }\Psi _{n}\mathbf { M}_{n}\sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime }_{k}(\varvec{\theta }_{0}), \end{aligned}$$
(A.9)

Since, by Assumption B.2, the limit

$$\begin{aligned} \lim _{n\rightarrow \infty } \Psi _{n}\mathbf { M}_{n}\sum _{k=1}^{n}\eta _{k}{\mathbf {f}}^{\prime }_{k}({\varvec{\theta }}_{0}) \end{aligned}$$

exists in distribution, this completes the proof of Theorem 5.2. \(\square\)
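
As a sanity check on the mechanics above, consider the linear specialization \(f_{k}(\varvec{\theta })=\varvec{\theta }^{\top }{\mathbf {x}}_{k}\) (an illustrative case; the design vectors \({\mathbf {x}}_{k}\) are not objects of this appendix). Then \(\mathbf {f}^{\prime }_{k}={\mathbf {x}}_{k}\) and \(\mathbf {f}^{\prime \prime }_{k}=0\), so \(\mathbf {S}_{n}^{\prime \prime }(\varvec{\theta })=2\sum _{k=1}^{n}{\mathbf {x}}_{k}{\mathbf {x}}_{k}^{\top }\) exactly, and choosing \(\mathbf {M}_{n}=\big (\sum _{k=1}^{n}{\mathbf {x}}_{k}{\mathbf {x}}_{k}^{\top }\big )^{-1}\) and \(\Psi _{n}=\big (\sum _{k=1}^{n}{\mathbf {x}}_{k}{\mathbf {x}}_{k}^{\top }\big )^{1/2}\), Eq. (A.3) reduces to

$$\begin{aligned} \Psi _{n}\left( \widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}\right) =\left( \sum _{k=1}^{n}{\mathbf {x}}_{k}{\mathbf {x}}_{k}^{\top }\right) ^{-1/2}\sum _{k=1}^{n}\eta _{k}{\mathbf {x}}_{k}, \end{aligned}$$

the classical normalized martingale to which Assumption B.2 attaches a limiting distribution.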

1.3 Proof of Theorem 6.3

This proof closely follows that of Theorem 2 of Lai [36]. By (A.4), and since \(\mathbf { S}_{n}^{\prime }(\widehat{\varvec{\theta }}_{n})=0\), we have

$$\begin{aligned} 0=\frac{-1}{2}\mathbf { S}_{n}^{\prime }(\widehat{\varvec{\theta }}_{n})=\sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime }_{k}(\widehat{\varvec{\theta }}_{n})-\sum _{k=1}^{n}\mathbf { f}^{\prime }_{k}(\widehat{\varvec{\theta }}_{n})\big (f_{k}(\widehat{\varvec{\theta }}_{n})-f_{k}(\varvec{ \theta }_{0})\big ). \end{aligned}$$
(A.10)

An application of the mean value theorem readily implies that

$$\begin{aligned} \frac{{\mathbf {f}}_{k}^{\prime }(\widehat{\varvec{\theta }}_{n})-{\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})}{\widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}}={\mathbf {f}}^{\prime \prime }_{k}({\varvec{\theta }}_{1}), \end{aligned}$$

which gives

$$\begin{aligned} {{\mathbf {f}}_{k}^{\prime }(\widehat{\varvec{\theta }}_{n})={\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})}+{\mathbf {f}}^{\prime \prime }_{k}({\varvec{\theta }}_{1})({\widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}}), ~~ \text{ for } ~~ {\varvec{\theta }}_{1} \in ({\varvec{\theta }}_{0}, \widehat{\varvec{\theta }}_{n}), \end{aligned}$$

and so

$$\begin{aligned} \sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime }_{k}(\widehat{\varvec{\theta }}_{n})=\sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime }_{k}({\varvec{\theta }}_{0})+ \sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime \prime }_{k}({\varvec{\theta }}_{1})(\widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}). \end{aligned}$$
(A.11)

Again from the mean value theorem, one can see that

$$\begin{aligned} \mathbf {f}_{k}^{\prime }(\widehat{\varvec{\theta }}_{n}) &= \mathbf { f}_{k}^{\prime }(\varvec{ \theta }_{0})+\mathbf { f}^{\prime \prime }_{k}({\varvec{\theta }}_{2})({\widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}}), ~~ \text{ for } ~~ {\varvec{\theta }}_{2} \in ({\varvec{\theta }}_{0}, \widehat{\varvec{\theta }}_{n}), \\ \frac{{\mathbf {f}}_{k}(\widehat{\varvec{\theta }}_{n})-\mathbf { f}_{k}(\varvec{ \theta }_{0})}{\widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}} &= \mathbf { f}^{\prime }_{k}({\varvec{\theta }}_{3})^{\top }, ~~ \text{ for } ~~ {\varvec{\theta }}_{3} \in ({\varvec{\theta }}_{0}, \widehat{\varvec{\theta }}_{n}), \\ {\mathbf {f}}_{k}^{\prime }({\varvec{\theta }}_{3}) &= \mathbf { f}_{k}^{\prime }(\varvec{ \theta }_{0})+\mathbf { f}^{\prime \prime }_{k}({\varvec{\theta }}_{4})({{\varvec{\theta }}_{3}-{\varvec{\theta }}_{0}}), ~~ \text{ for } ~~ {\varvec{\theta }}_{4} \in ({\varvec{\theta }}_{0}, {\varvec{\theta }}_{3}). \end{aligned}$$

We infer that

$$\begin{aligned}&\sum _{k=1}^{n}\mathbf { f}^{\prime }_{k}(\widehat{\varvec{\theta }}_{n})\big (\mathbf { f}_{k}(\widehat{\varvec{\theta }}_{n})-\mathbf { f}_{k}(\varvec{ \theta }_{0})\big )\\&\qquad = \sum _{k=1}^{n}\left[ {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})+\mathbf { f}^{\prime \prime }_{k}({\varvec{\theta }}_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{\theta }_{0})\right] \left[ \mathbf { f}_{k}^{\prime }({\varvec{\theta }}_{3})^{\top }(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0})\right] \\&\qquad = \sum _{k=1}^{n}\left[ {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})+\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{\theta }_{0})\right] \left[ {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})+\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_{4})(\varvec{\theta }_{3}-\varvec{\theta }_{0})\right] ^{\top }(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}) \\&\qquad = \sum _{k=1}^{n}\left[ {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})+{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{\theta }_{0})\right] \left[ {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})^{\top }+( \varvec{\theta }_{3}-\varvec{\theta }_{0})^{\top }{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_{4})^{\top }\right] (\widehat{\varvec{\theta }}_{n}-\varvec{\theta }_{0}) \\&\qquad = \left[ \sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top }+\sum _{k=1}^{n}{\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})(\varvec{\theta }_{3}-\varvec{ \theta }_{0})^{\top } {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) ^{\top }\right. \\&\left. \qquad +\,\sum _{k=1}^{n}\mathbf { f}_{k}^{\prime \prime }(\varvec{ \theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top }+\sum _{k=1}^{n}{\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0})(\varvec{\theta }_{3}-\varvec{ \theta }_{0})^{\top } {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) ^{\top }\right] (\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}). \end{aligned}$$

It follows that

$$\begin{aligned} \sum _{k=1}^{n}\eta _{k}{\mathbf {f}}^{\prime }_{k}(\varvec{\theta }_{0})&= \left[ \sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top }+\sum _{k=1}^{n}{\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})(\varvec{\theta }_{3}-\varvec{ \theta }_{0})^{\top } {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) ^{\top }\right. \\&\quad +\sum _{k=1}^{n}{\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top }+\sum _{k=1}^{n}{\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0})(\varvec{\theta }_{3}-\varvec{ \theta }_{0})^{\top } {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) ^{\top }\\&\quad \left. -\sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_{0})-\sum _{k=1}^{n}\eta _{k}\big (\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_{1})-\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_{0})\big )\right] (\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}). \end{aligned}$$

Introducing the normalizing matrices \(\varvec{\Lambda }_{n}^{1/2}\) and \(\varvec{\Lambda }_{n}^{-1/2}\), we obtain

$$\begin{aligned} \sum _{k=1}^{n}\eta _{k}{\mathbf {f}}^{\prime }_{k}(\varvec{\theta }_{0})&= \varvec{\Lambda }_{n}^{1/2}\Big [\varvec{\Lambda }_{n}^{-1/2}\sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top }\varvec{\Lambda }_{n}^{-1/2}\\&\quad + \varvec{\Lambda }_{n}^{-1/2}\sum _{k=1}^{n}{\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})(\varvec{\theta }_{3}-\varvec{ \theta }_{0})^{\top } {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) ^{\top }\varvec{\Lambda }_{n}^{-1/2}\\&\quad +\varvec{\Lambda }_{n}^{-1/2}\sum _{k=1}^{n}{\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top }\varvec{\Lambda }_{n}^{-1/2} \\&\quad + \varvec{\Lambda }_{n}^{-1/2}\sum _{k=1}^{n}{\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0})(\varvec{\theta }_{3}-\varvec{ \theta }_{0})^{\top } {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4})^{\top }\varvec{\Lambda }_{n}^{-1/2}\\&\quad -\varvec{\Lambda }_{n}^{-1/2}\sum _{k=1}^{n}\eta _{k}{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_{0})\varvec{\Lambda }_{n}^{-1/2}\\&\quad - \varvec{\Lambda }_{n}^{-1/2}\sum _{k=1}^{n}\eta _{k}\big (\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_{1})-\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_{0})\big )\varvec{\Lambda }_{n}^{-1/2}\Big ]\varvec{\Lambda }_{n}^{1/2}(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}). \end{aligned}$$

Then, we evaluate the preceding terms one by one, for all large n. It will be shown that all terms tend to 0 in probability, except for the first one, which appears in the central limit theorem (Eq. (6.9)). We work with suitably chosen Hilbert spaces H, making use of martingale central limit theorems and probability bounds for H-valued martingales. Recall that \(\{\eta _k, {\mathscr {F}}_{k-1}, k\ge 1 \}\) are martingale differences and \(\{\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta })-\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_0)\}\) are \({\mathscr {F}}_{k-1}\)-measurable H-valued random variables. The last term of the previous equation gives

$$\begin{aligned}&\Big |\Big | \varvec{\Lambda }_{n}^{-1/2} \sum _{k=1}^{n}\eta _{k}\big ({\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_{1})-{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_{0})\big )\varvec{\Lambda }_{n}^{-1/2}\Big |\Big |^2 \\&\quad \le \Big |\Big | \varvec{\Lambda }_{n}^{-1/2}\Big |\Big |^4 p^2 \sup _{\varvec{\theta }\in {\varvec{B}}(\varvec{\theta }_0)}\Big |\sum _{k=1}^n \eta _k\{{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta })-{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_0)\}\Big | ^2 \\&\quad \le \Big |\Big | \varvec{\Lambda }_{n}^{-1/2}\Big |\Big |^4 p^2 vol({\varvec{B}}) \Big |\Big |\sum _{k=1}^n \eta _k\{{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta })-{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_0)\} \Big |\Big |^2 \\&\quad \le p^2 \sum _{m=1}^p \sum _{{\varvec{j}}\in {\varvec{J}}(m, p)}O_{{\mathbb {P}}}\left( \sum _{k=1}^n \int _{{\varvec{B}}(\varvec{\theta }_0)} [\varvec{D_j}({\mathbf {f}}^{\prime \prime }_{k})]^2 d {\theta }_{j_1} \cdots d {\theta }_{j_m}\right) /\lambda _{\min }^4(\varvec{\Lambda }_{n}^{1/2}) \\&\quad {\mathop {\rightarrow }\limits ^{{\mathbb {P}}}} 0, \end{aligned}$$

with

$$\begin{aligned} \Big |\Big | \varvec{\Lambda }_{n}^{-1/2}\Big |\Big | = 1 / \lambda _{\min }(\varvec{\Lambda }_{n}^{1/2}), \end{aligned}$$

where we have used Assumption C.4 in the last step.

In more detail, it is important to see that the sup-norm of

$$\begin{aligned} \sum _{k=1}^n \eta _k\{\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta })-\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_0)\} \end{aligned}$$

is bounded by its H-norm, by means of the Cauchy–Schwarz inequality. Since

$$\begin{aligned} {\mathbf {f}}_{k}(\varvec{\theta })-\mathbf { f}_{k}(\varvec{\theta }_0)=\sum _{m=1}^p \sum _{{\varvec{j}} \in {\varvec{J}}(m, p)} \int _{{\theta }_{0_{j_m}}}^{{\theta }_{j_m}} \cdots \int _{{\theta }_{0_{j_1}}}^{{\theta }_{j_1}} \varvec{D_j}{\mathbf {f}}_k d {\theta }_{j_1} \cdots d {\theta }_{j_m}, \end{aligned}$$

this implies

$$\begin{aligned} {\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta })-{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_0)=\sum _{m=1}^p \sum _{{\varvec{j}} \in {\varvec{J}}(m, p)} \int _{{\theta }_{0_{j_m}}}^{{\theta }_{j_m}} \cdots \int _{{\theta }_{0_{j_1}}}^{{\theta }_{j_1}} \varvec{D_j}{\mathbf {f}}^{\prime \prime }_k d {\theta }_{j_1} \cdots d {\theta }_{j_m}. \end{aligned}$$

Making use of the Cauchy–Schwarz inequality, we infer that

$$\begin{aligned}&\sup _{\varvec{\theta }\in {\varvec{B}}(\varvec{\theta }_0)}\Big |\sum _{k=1}^n \eta _k\{{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta })-\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_0)\}\Big | ^2\\&\quad = \sup _{\varvec{\theta }\in {\varvec{B}}(\varvec{\theta }_0)}\Big |\sum _{m=1}^p \sum _{{\varvec{j}}} \int _{{\theta }_{0_{j_m}}}^{{\theta }_{j_m}} \cdots \int _{{\theta }_{0_{j_1}}}^{{\theta }_{j_1}}\sum _{k=1}^n \eta _k \varvec{D_j}{\mathbf {f}}^{\prime \prime }_k d {\theta }_{j_1} \cdots d {\theta }_{j_m}\Big | ^2 \\&\quad \le \left\{ \int _{{\theta }_{0_{j_m}}}^{{\theta }_{j_m}} \cdots \int _{{\theta }_{0_{j_1}}}^{{\theta }_{j_1}} d {\theta }_{j_1} \cdots d {\theta }_{j_m} \right\} \\&\qquad \times \left\{ \int _{{\theta }_{0_{j_m}}}^{{\theta }_{j_m}} \cdots \int _{{\theta }_{0_{j_1}}}^{{\theta }_{j_1}} \left[ \sum _{k=1}^n \eta _k \varvec{D_j}{\mathbf {f}}^{\prime \prime }_k \right] ^2 d {\theta }_{j_1} \cdots d {\theta }_{j_m}\right\} \\&\quad =vol({\varvec{B}}\big (\varvec{\theta }_{0})\big ) \left\| \sum _{k=1}^n \eta _k\{\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta })-\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_0)\} \right\| ^2_{H}. \end{aligned}$$

We obtain

$$\begin{aligned} \left\| \sum _{k=1}^n \eta _k\{\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta })-\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_0)\} \right\| ^2_{H}= O_{{\mathbb {P}}}\left( \sum _{k=1}^n {\mathbb {E}}\Big ( \left\| \eta _k\{\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta })-\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_0)\} \right\| ^2_{H} | {\mathscr {F}}_{k-1} \Big )\right) . \end{aligned}$$
(A.12)

Choose nonrandom constants \(c_k\) sufficiently large that

$$\begin{aligned} {\mathbb {P}}\Big \{ \left\| \mathbf { f}^{\prime \prime }_{k}(\varvec{\theta })-\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_0) \right\| ^2_{H}+ {\mathbb {E}}(\eta _k^2| {\mathscr {F}}_{k-1} ) > c_k \Big \} \le k^{-2}. \end{aligned}$$

By the Borel–Cantelli lemma, we have

$$\begin{aligned}&{\mathbb {P}}\Big (\eta _k\{{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta })-{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_0)\}\mathbb {1} \Big \{ \left\| {\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta })-{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_0) \right\| ^2_{H} \le c_k ~ \text{ and } {\mathbb {E}}( \eta _k^2| {\mathscr {F}}_{k-1} )\le c_k \Big \} \nonumber \\&\quad =\eta _k\{\mathbf {f}^{\prime \prime }_{k}(\varvec{\theta })-\mathbf {f}^{\prime \prime }_{k}(\varvec{\theta }_0)\} \text{ for } \text{ large } k \Big )=1. \end{aligned}$$
(A.13)

We readily infer that

$$\begin{aligned} {\mathbb {P}}\Big ( \left\| \mathbf { f}^{\prime \prime }_{k}(\varvec{\theta })-\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_0) \right\| ^2_{H}\le c_k \text{ and } {\mathbb {E}}( \eta _k^2| {\mathscr {F}}_{k-1} )\le c_k \Big ) = {\mathbb {P}}\Big ( \left\| \mathbf { f}^{\prime \prime }_{k}(\varvec{\theta })-\mathbf { f}^{\prime \prime }_{k}(\varvec{\theta }_0) \right\| ^2_{H} \le c_k\Big ), \end{aligned}$$

provided that

$$\begin{aligned} \sup _k{{\mathbb {E}}(\eta _k^2|{\mathscr {F}}_{k-1}) }< \infty ~~ a.s. \end{aligned}$$

Hence, from (A.12) and (A.13), it follows that

$$\begin{aligned} \left\| \sum _{k=1}^n \eta _k\{{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta })-{\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_0)\} \right\| ^2 =O_{{\mathbb {P}}}\left( \sum _{k=1}^n \int _{(\varvec{\theta }_0 ; {\varvec{j}})} [\varvec{D_j}({\mathbf {f}}^{\prime \prime }_{k})]^2 d {\theta }_{j_1} \cdots d {\theta }_{j_m}\right) . \end{aligned}$$

Again, an application of the Cauchy–Schwarz inequality gives

$$\begin{aligned} \left\| \varvec{\Lambda }_{n}^{-1/2} {\sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime \prime }_{k}({\varvec{\theta }}_{0})}\varvec{\Lambda }_{n}^{-1/2} \right\| ^2 &\le \left\| \varvec{\Lambda }_{n}^{-1/2} \right\| ^4 \left\| {\sum _{k=1}^{n}\eta _{k}{\mathbf {f}}^{\prime \prime }_{k}({\varvec{\theta }}_{0})} \right\| ^2 \\ &= \left\| \varvec{\Lambda }_{n}^{-1/2} \right\| ^4 O_{{\mathbb {P}}}\Big (\sum _{k=1}^n {\mathbb {E}}\Big (\left\| \eta _k {\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_0)\right\| ^2 | {\mathscr {F}}_{k-1}\Big )\Big ) \\ &= \left\| \varvec{\Lambda }_{n}^{-1/2}\right\| ^4 O_{{\mathbb {P}}}\Big (\sum _{k=1}^n \left\| {\mathbf {f}}^{\prime \prime }_{k}(\varvec{\theta }_0)\right\| ^2 \Big ) \\ &\le \lambda _{\min }^2(\varvec{\Lambda }_{n}^{1/2})/ \lambda _{\min }^4(\varvec{\Lambda }_{n}^{1/2}) {\mathop \rightarrow \limits ^{{\mathbb {P}}}} 0. \end{aligned}$$

We now treat the following term

$$\begin{aligned}&\left\| \varvec{\Lambda }_{n}^{-1/2} {\sum _{k=1}^{n}{\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})({\varvec{\theta }}_{3}-\varvec{ \theta }_{0})^{\top } {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) ^{\top }}\varvec{\Lambda }_{n}^{-1/2}\right\| ^2 \\&\quad \le \left\| \sum _{k=1}^{n}\left[ \varvec{\Lambda }_{n}^{-1/2} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})\right] \left[ {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) (\varvec{\theta }_{3}-\varvec{ \theta }_{0})\right] ^{\top }\right\| ^2 \left\| \varvec{\Lambda }_{n}^{-1/2}\right\| ^2 \\&\quad ~~~~\text{(from the Cauchy{-}Schwarz inequality)}\\&\quad \le \sum _{k=1}^{n} \left\| \varvec{\Lambda }_{n}^{-1/2} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})\right\| ^2 \sum _{k=1}^{n} \left\| {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) ({\varvec{\theta }}_{3}-\varvec{ \theta }_{0})\right\| ^2 \left\| \varvec{\Lambda }_{n}^{-1/2}\right\| ^2 \\&\quad \le \sum _{k=1}^{n} \left\| \varvec{\Lambda }_{n}^{-1/2} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})\right\| ^2 \sum _{k=1}^{n} \Big \{p^2 \max \Big |{\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) \Big |^2 \left\| (\varvec{\theta }_{3}-\varvec{ \theta }_{0}) \right\| ^2 \Big \} \left\| \varvec{\Lambda }_{n}^{-1/2}\right\| ^2 {\mathop \rightarrow \limits ^{{\mathbb {P}}}} 0. \end{aligned}$$

Let us treat the following term,

$$\begin{aligned}&\left\| \varvec{\Lambda }_{n}^{-1/2} \sum _{k=1}^{n}{\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top }\varvec{\Lambda }_{n}^{-1/2}\right\| ^2\\&\quad \le \left\| \varvec{\Lambda }_{n}^{-1/2}\right\| ^2 \left\| \sum _{k=1}^{n} \left[ {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{2}) (\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0})\right] \left[ \varvec{\Lambda }_{n}^{-1/2} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})\right] ^{\top }\right\| ^2 \\&\quad ~~~~\text{(applying the Cauchy{-}Schwarz inequality)}\\&\quad \le \sum _{k=1}^{n} \left\| \varvec{\Lambda }_{n}^{-1/2} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})\right\| ^2 \sum _{k=1}^{n} \left\| {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{2}) (\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0})\right\| ^2 \left\| \varvec{\Lambda }_{n}^{-1/2}\right\| ^2 \\&\quad \le \sum _{k=1}^{n} \left\| \varvec{\Lambda }_{n}^{-1/2} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})\right\| ^2 \sum _{k=1}^{n} \Big \{p^2 \max \Big |{\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{2}) \Big |^2 \left\| ({\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0})}\right\| ^2\Big \} \left\| \varvec{\Lambda }_{n}^{-1/2}\right\| ^2 {\mathop \rightarrow \limits ^{{\mathbb {P}}}} 0. \end{aligned}$$

Once more, an application of the Cauchy–Schwarz inequality implies that

$$\begin{aligned}&\left\| \varvec{\Lambda }_{n}^{-1/2} \sum _{k=1}^{n}\mathbf { f}_{k}^{\prime \prime }(\varvec{ \theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0})(\varvec{\theta }_{3}-\varvec{ \theta }_{0})^{\top } {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) ^{\top } \varvec{\Lambda }_{n}^{-1/2}\right\| ^2 \\&\quad \le \left\| \varvec{\Lambda }_{n}^{-1/2}\right\| ^{4} \left\| \sum _{k=1}^{n}{\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0})(\varvec{\theta }_{3}-\varvec{ \theta }_{0})^{\top } {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) ^{\top } \right\| ^2 \\&\quad \le \left\| \varvec{\Lambda }_{n}^{-1/2}\right\| ^{4} \sum _{k=1}^{n}\left\| {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{2})(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0})\right\| ^2 \sum _{k=1}^{n} \left\| {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) (\varvec{\theta }_{3}-\varvec{ \theta }_{0}) \right\| ^2 \\&\quad \le \left\| \varvec{\Lambda }_{n}^{-1/2} \right\| ^{4} \sum _{k=1}^{n} \Big \{p^2 \max \Big | \mathbf { f}_{k}^{\prime \prime }(\varvec{ \theta }_{2}) \Big |^2 \left\| ({\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0})}\right\| ^2\Big \}\\&\qquad \times \sum _{k=1}^{n} \Big \{ p^2 \max \Big | \mathbf { f}_{k}^{\prime \prime }(\varvec{ \theta }_{4}) \Big |^2 \left\| (\varvec{\theta }_{3}-\varvec{ \theta }_{0}) \right\| ^2\Big \} {\mathop \rightarrow \limits ^{{\mathbb {P}}}} 0. \end{aligned}$$

Making use of Assumption B.1.(iii), we infer that

$$\begin{aligned} \varvec{\Lambda }_{n}^{-1/2} \left\{ \sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top } \right\} ^{1/2} {\mathop \rightarrow \limits ^{{\mathbb {P}}}} {\mathbf {I}}. \end{aligned}$$

It then follows from the above that

$$\begin{aligned} \varvec{\Lambda }_{n}^{-1/2} \sum _{k=1}^{n}\eta _{k}{\mathbf {f}}^{\prime }_{k}(\varvec{\theta }_{0})= O_{{\mathbb {P}}}(1) \left\{ \sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top }\right\} ^{1/2}\varvec{\Lambda }_{n}^{-1/2}\varvec{\Lambda }_{n}^{1/2}(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}), \end{aligned}$$

or

$$\begin{aligned} \varvec{\Lambda }_{n}^{-1/2} \sum _{k=1}^{n}\eta _{k}\mathbf { f}^{\prime }_{k}(\varvec{\theta }_{0})=O_{{\mathbb {P}}}(1) \left\{ \sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top } \right\} ^{1/2}(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}). \end{aligned}$$
(A.14)

Following Theorem 2 of [36], the martingale central limit theorem of [26] (Chapters 3 and 6) applies, since the following conditions are satisfied:

1.4 Statement D

1.

    \(\max \left\| \varvec{\Lambda }_{n}^{-1/2}\eta _k\mathbf { f}^{\prime }_{k}(\varvec{\theta }_{0}) \right\| {\mathop \rightarrow \limits ^{{\mathbb {P}}}} 0\) , and

    $$\begin{aligned} {\left\{ \begin{array}{ll} \max \left\| \varvec{\Lambda }_{n}^{-1/2}{\mathbf {f}}^{\prime }_{k}(\varvec{\theta }_{0}) \right\| {\mathop \rightarrow \limits ^{{\mathbb {P}}}} 0, \qquad \text{ from } \text{ assumption } \text{ C.2 } \\ \qquad \text{ and }\\ \sup _k {\mathbb {E}} \big (\eta _k^2 | {\mathscr {F}}_{k-1}\big ) < \infty .\\ \end{array}\right. } \end{aligned}$$
2.

    Also

    $$\begin{aligned} \sum _{k=1}^{n} \varvec{\Lambda }_{n}^{-1/2} \eta _k {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) \sum _{k=1}^{n} \left[ \varvec{\Lambda }_{n}^{-1/2} \eta _k {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})\right] ^{\top } {\mathop \rightarrow \limits ^{{\mathbb {P}}}} \sigma ^2 \varvec{\Lambda }_{n}^{-1/2} \sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})^{\top } {\varvec{\Lambda }_{n}^{-1/2}}^{\top } . \end{aligned}$$

    By using the fact that

    $$\begin{aligned} \varvec{\Lambda }_{n}^{-1/2}\left( \sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})^{\top }\right) ^{1/2} {\mathop \rightarrow \limits ^{{\mathbb {P}}}} {\mathbf {I}}, \end{aligned}$$

    we readily obtain that

    $$\begin{aligned} \sum _{k=1}^{n} \varvec{\Lambda }_{n}^{-1/2} \eta _k {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) \sum _{k=1}^{n} \left[ \varvec{\Lambda }_{n}^{-1/2} \eta _k {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})\right] ^{\top }{\mathop {\rightarrow }\limits ^{{\mathbb {P}}}} \sigma ^2 {\mathbf {I}}. \end{aligned}$$

Consequently, by the preceding conditions and making use of (A.14), we have, as \(n\rightarrow \infty\),

$$\begin{aligned} \left\{ \sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top } \right\} ^{1/2}(\widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}) {\mathop {\rightarrow }\limits ^{d}} {\mathcal {N}} (0, \sigma ^2 {\mathbf {I}}). \end{aligned}$$
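
For illustration (a standard consequence, not part of the proof), the continuous mapping theorem applied to the last display yields

$$\begin{aligned} \frac{1}{\sigma ^{2}}\left( \widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}\right) ^{\top }\left\{ \sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) ^{\top }\right\} \left( \widehat{\varvec{\theta }}_{n}-\varvec{ \theta }_{0}\right) {\mathop {\rightarrow }\limits ^{d}} \chi ^{2}_{p}, \end{aligned}$$

the usual basis for asymptotic confidence regions for \(\varvec{\theta }_{0}\), with \(\sigma ^{2}\) replaced by a consistent estimator in practice.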

Now, as in [36], we may consider the assumption that

$$\begin{aligned} \frac{1}{2n}{\mathbf {S}}_n^{\prime \prime }(\varvec{\theta }_0) {\rightarrow } {\mathbf {V}}, ~~~\text{ a.s.}, \end{aligned}$$
(A.15)

where \({\mathbf {V}}\) is a positive definite, nonrandom matrix. Because

$$\begin{aligned} \frac{1}{2} {\mathbf {S}}_n^{\prime \prime }(\varvec{\theta }_0)=- \sum _{k=1}^{n} \eta _k {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{0})+\sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})^{\top }, \end{aligned}$$

and by the martingale strong law [38], we infer that

$$\begin{aligned} \sum _{k=1}^{n} \eta _k {\mathbf {f}}_{k}^{\prime \prime }(\varvec{ \theta }_{0})= o \left( \sum _{k=1}^{n} \left\| \mathbf { f}_{k}^{\prime \prime }(\varvec{ \theta }_{0})\right\| ^2 \right) + O(1) \quad \text{ a.s.} \end{aligned}$$
(A.16)

Thus, assumption (A.15) simplifies to

$$\begin{aligned} \frac{1}{n} \sum _{k=1}^{n} {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}) {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0})^{\top } {\rightarrow } {\mathbf {V}}. \end{aligned}$$
(A.17)

Similarly to [26] (Chapter 6), we have, as \(n\rightarrow \infty\),

$$\begin{aligned} -\frac{1}{{\sqrt{n}}}\sum _{k=1}^{n} \eta _k {\mathbf {f}}_{k}^{\prime }(\varvec{ \theta }_{0}){\mathop \rightarrow \limits ^{d}} {\mathcal {N}} (0,\sigma ^2 {\mathbf {V}}), \end{aligned}$$

we then obtain, as \(n\rightarrow \infty\),

$$\begin{aligned} {\sqrt{n}}(\widehat{\varvec{\theta }}_{n}-{\varvec{\theta }}_{0}){\mathop \rightarrow \limits ^{d}}{\mathcal {N}} (0,\sigma ^2 \mathbf {V^{-1}}). \end{aligned}$$

Thus the proof is complete. \(\square\)
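
To see the limit law concretely, here is a Monte Carlo sketch (under illustrative assumptions: a scalar linear model with iid errors, a special case of martingale differences; this is not the paper's numerical example) comparing the empirical standard deviation of \(\sqrt{n}(\widehat{\theta }_{n}-\theta _{0})\) with the theoretical value \(\sigma /\sqrt{V}\):

```python
# Empirical check of sqrt(n)*(theta_hat - theta0) -> N(0, sigma^2 / V), where
# V = lim (1/n) sum x_k^2 = E[x^2]; here x ~ Uniform(1, 2), so V = 7/3.
import numpy as np

rng = np.random.default_rng(2)
theta0, sigma, n, reps = 1.5, 0.3, 5_000, 2_000

stats = np.empty(reps)
for r in range(reps):
    x = rng.uniform(1.0, 2.0, size=n)
    eta = sigma * rng.standard_normal(n)        # iid errors: a special MDS
    y = theta0 * x + eta
    stats[r] = np.sqrt(n) * ((x @ y) / (x @ x) - theta0)

print(stats.std(), sigma / np.sqrt(7.0 / 3.0))  # empirical vs theoretical sd
```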

Cite this article

Papamichail, C., Bouzebda, S. & Limnios, N. Regression Analysis of Stochastic Fatigue Crack Growth Model in a Martingale Difference Framework. J Stat Theory Pract 14, 44 (2020). https://doi.org/10.1007/s42519-020-00110-x