
Asymptotic and Bootstrap Tests for a Change in Autoregression Omitting Variability Estimation

  • Conference paper
Time Series Analysis and Forecasting (ITISE 2017)

Part of the book series: Contributions to Statistics


Abstract

A sequence of time-ordered observations follows an autoregressive model of order one, whose parameter may change at most once at some unknown time point. The aim is to test whether such a change has occurred. The change-point method presented here relies on a ratio-type test statistic based on maxima of cumulative sums. The main advantage of the developed approach is that the variance of the observations has to be neither known nor estimated. The asymptotic distribution of the test statistic under the no-change null hypothesis is derived. Moreover, we prove consistency of the test under the alternative. A bootstrap procedure is proposed as a completely data-driven technique without any tuning parameters. The results are illustrated through a simulation study, which demonstrates the computational efficiency of the procedure. A practical application to real data is presented as well.
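To make the construction concrete, below is a minimal sketch in Python of a ratio-type maximum-of-cumulative-sums statistic for a change in the AR(1) parameter, reconstructed from the algebra in the appendix. The paper's exact statistic \(\mathcal {V}_n\) (its Eq. (4)) is not reproduced in this excerpt, so the function names, the trimming parameter `gamma`, and the split-point range are illustrative assumptions; the bootstrap calibration is likewise omitted.

```python
import numpy as np

def ols_ar1(y):
    # OLS estimate of the AR(1) coefficient on a contiguous segment.
    return np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

def ratio_statistic(y, gamma=0.1):
    # Maximum over admissible split points k of the ratio of forward to
    # backward maxima of cumulative sums of the products
    # Y_j * (Y_{j+1} - beta_hat * Y_j); the error variance scales the
    # numerator and denominator equally and cancels in the ratio.
    n = len(y)
    ratios = []
    for k in range(max(2, int(gamma * n)), min(n - 2, int((1 - gamma) * n)) + 1):
        b1 = ols_ar1(y[:k])   # fit on the first k observations
        b2 = ols_ar1(y[k:])   # fit on the remaining observations
        num = np.abs(np.cumsum(y[:k - 1] * (y[1:k] - b1 * y[:k - 1]))).max()
        back = y[k:-1] * (y[k + 1:] - b2 * y[k:-1])
        den = np.abs(np.cumsum(back[::-1])).max()  # backward partial sums
        ratios.append(num / den)
    return max(ratios)

# Simulated AR(1) series with the parameter changing mid-sample.
rng = np.random.default_rng(1)
n, tau = 500, 250
y = np.zeros(n)
for i in range(1, n):
    beta = 0.2 if i <= tau else 0.6
    y[i] = beta * y[i - 1] + rng.normal()
print(ratio_statistic(y))  # large values indicate a change
```

Note how the error variance \(\sigma ^2\) never appears in the computation: omitting its estimation is the feature the title refers to.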


References

  1. Hušková, M., Prášková, Z., Steinebach, J.: On the detection of changes in autoregressive time series I. Asymptotics. J. Stat. Plan. Infer. 137(4), 1243–1259 (2007)

  2. Hušková, M., Kirch, C., Prášková, Z., Steinebach, J.: On the detection of changes in autoregressive time series II. Resampling procedures. J. Stat. Plan. Infer. 138(6), 1697–1721 (2008)

  3. Horváth, L., Horváth, Z., Hušková, M.: Ratio tests for change point detection. In: Balakrishnan, N., Peña, E.A., Silvapulle, M.J. (eds.) Beyond Parametrics in Interdisciplinary Research: Festschrift in Honor of Professor Pranab K. Sen, vol. 1, pp. 293–304. IMS Collections, Beachwood, Ohio (2009)

  4. Peštová, B., Pešta, M.: Testing structural changes in panel data with small fixed panel size and bootstrap. Metrika 78(6), 665–689 (2015)

  5. Peštová, B., Pešta, M.: Change point estimation in panel data without boundary issue. Risks 5(1), 7 (2017)

  6. Prague Stock Exchange: PX Index 2015. https://www.pse.cz/dokument.aspx?k=Burzovni-Indexy. Updated 30 April 2015; accessed 30 April 2015

  7. Davidson, J.: Stochastic Limit Theory: An Introduction for Econometricians. Oxford University Press, New York (1994)


Acknowledgements

Institutional support to Barbora Peštová was provided by RVO:67985807. Michal Pešta was supported by the Czech Science Foundation project No. 18-01781Y.

Author information

Correspondence to Michal Pešta.

Appendix: Proofs

Proof

(of Theorem 1) Let us consider an array

$$ U_{n,i}=\frac{\sqrt{1-\beta ^2}}{\sigma ^2\sqrt{n-1}}Y_{i-1}\varepsilon _i,\quad i=2,\ldots ,n $$

and a filtration \(\mathcal {F}_{n,i}=\sigma \{\varepsilon _j,\,j\le i\}\), \(i=2,\ldots ,n\) and \(n\in \mathbb {N}\). Then, \(\{U_{n,i},\mathcal {F}_{n,i}\}\) is a martingale difference array such that

$$ {\mathsf {E}}U_{n,i}^2=\frac{1-\beta ^2}{\sigma ^4(n-1)}{\mathsf {E}}Y_{i-1}^2\varepsilon _i^2=\frac{1}{n-1}. $$
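The last equality uses the independence of \(Y_{i-1}\) and \(\varepsilon _i\) together with the second moment of the stationary AR(1) process (assuming stationarity under the null hypothesis, which this excerpt does not restate):

$$ {\mathsf {E}}Y_{i-1}^2\varepsilon _i^2={\mathsf {E}}Y_{i-1}^2\,{\mathsf {E}}\varepsilon _i^2=\frac{\sigma ^2}{1-\beta ^2}\cdot \sigma ^2=\frac{\sigma ^4}{1-\beta ^2}. $$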

Moreover,

$$ \sum _{i=2}^nU_{n,i}^2-\sum _{i=2}^n{\mathsf {E}}U_{n,i}^2=\frac{1-\beta ^2}{\sigma ^4(n-1)}\sum _{i=2}^n(Y_{i-1}^2\varepsilon _i^2-{\mathsf {E}}Y_{i-1}^2\varepsilon _i^2). $$

Furthermore,

$$\begin{aligned} \frac{1}{n-1}\sum _{i=2}^n(Y_{i-1}^2\varepsilon _i^2&-{\mathsf {E}}Y_{i-1}^2\varepsilon _i^2)\\&=\frac{1}{n-1}\sum _{i=2}^n[Y_{i-1}^2(\varepsilon _i^2-\sigma ^2)]+\frac{1}{n-1}\sum _{i=2}^n(Y_{i-1}^2-{\mathsf {E}}Y_{i-1}^2)\sigma ^2. \end{aligned}$$

Since \(\{Y_{i-1}^2(\varepsilon _i^2-\sigma ^2)\}\) is again a martingale difference array with respect to \(\mathcal {F}_{n,i}\), Chebyshev's inequality together with Assumption 3 yields

$$ \frac{1}{n-1}\sum _{i=2}^n[Y_{i-1}^2(\varepsilon _i^2-\sigma ^2)]\xrightarrow [n\rightarrow \infty ]{\mathsf {P}}0. $$

Similarly, as a consequence of Lemma 4.2 in [1],

$$ \frac{1}{n-1}\sum _{i=2}^n(Y_{i-1}^2-{\mathsf {E}}Y_{i-1}^2)\xrightarrow [n\rightarrow \infty ]{\mathsf {P}}0. $$

Thus,

$$\begin{aligned} \sum _{i=2}^n U_{n,i}^2\xrightarrow [n\rightarrow \infty ]{\mathsf {P}}1. \end{aligned}$$
(11)

Next, for any \(\epsilon >0\),

$$\begin{aligned} \mathsf {P}\left( \max _{2\le i\le n}U_{n,i}^2>\epsilon \right)&\le \sum _{i=2}^n\mathsf {P}\left( \frac{1-\beta ^2}{\sigma ^4(n-1)}Y_{i-1}^2\varepsilon _i^2>\epsilon \right) \nonumber \\&\le \frac{(1-\beta ^2)^2}{\epsilon ^2\sigma ^8(n-1)^2}\sum _{i=2}^n{\mathsf {E}}Y_{i-1}^4{\mathsf {E}}\varepsilon _i^4\xrightarrow [n\rightarrow \infty ]{}0. \end{aligned}$$
(12)

Additionally,

$$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{i=2}^{[nt]}{\mathsf {E}}U_{n,i}^2=\lim _{n\rightarrow \infty }\frac{[nt]-1}{n-1}=t \end{aligned}$$
(13)

for all \(t\in [0,1]\).

Applying Theorem 27.14 of [7] to the martingale difference array \(\{U_{n,i}, \mathcal {F}_{n,i}\}\), whose assumptions are satisfied due to (11), (12), and (13), we obtain

$$\begin{aligned} \sum _{i=2}^{[nt]}U_{n,i}\xrightarrow [n\rightarrow \infty ]{\mathscr {D}[0,1]}\mathcal {W}(t). \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{1}{\sqrt{n-1}}\left( \sum _{i=2}^{[nt]}Y_{i-1}\varepsilon _i,\sum _{i=[nt]+2}^{n}Y_{i-1}\varepsilon _i\right) \xrightarrow [n\rightarrow \infty ]{\mathscr {D}^2[0,1]}\frac{\sigma ^2}{\sqrt{1-\beta ^2}}\left( \mathcal {W}(t),\widetilde{\mathcal {W}}(t)\right) , \end{aligned}$$
(14)

where \(\widetilde{\mathcal {W}}(t)=\mathcal {W}(1)-\mathcal {W}(t)\).

Let us define

$$ \mathbf{Y}_{j,l}=(Y_j,\ldots ,Y_l)^{\top }\quad \text{ and }\quad {\varvec{\varepsilon }}_{j,l}=(\varepsilon _j,\ldots ,\varepsilon _l)^{\top }. $$
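In the displays (15) and (16) below, the segment estimators are substituted in their least-squares form, reconstructed here from the substitution itself since the estimators' definitions from the main text are not reproduced in this excerpt: under the null hypothesis,

$$ \widehat{\beta }_{1k}=\left( \mathbf{Y}_{1,k-1}^{\top }\mathbf{Y}_{1,k-1}\right) ^{-1}\mathbf{Y}_{1,k-1}^{\top }\mathbf{Y}_{2,k}=\beta +\left( \mathbf{Y}_{1,k-1}^{\top }\mathbf{Y}_{1,k-1}\right) ^{-1}\mathbf{Y}_{1,k-1}^{\top }{\varvec{\varepsilon }}_{2,k}, $$

and analogously for \(\widehat{\beta }_{2k}\) on the second segment.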

Hence, for the expression in the numerator of \(\mathcal {V}_n\), it holds that

$$\begin{aligned}&\sum _{j=1}^{i-1}Y_{j}(Y_{j+1}-\widehat{\beta }_{1k}Y_j)=\mathbf{Y}_{1,i-1}^{\top }\left( \mathbf{Y}_{2,i}-\mathbf{Y}_{1,i-1}\widehat{\beta }_{1k}\right) \nonumber \\&=\mathbf{Y}_{1,i-1}^{\top }\left( \mathbf{Y}_{1,i-1}\beta +{\varvec{\varepsilon }}_{2,i}-\mathbf{Y}_{1,i-1}\beta -\mathbf{Y}_{1,i-1}\left( \mathbf{Y}_{1,k-1}^{\top }\mathbf{Y}_{1,k-1}\right) ^{-1}\mathbf{Y}_{1,k-1}^{\top }{\varvec{\varepsilon }}_{2,k}\right) \nonumber \\&=\mathbf{Y}_{1,i-1}^{\top }{\varvec{\varepsilon }}_{2,i}-\mathbf{Y}_{1,i-1}^{\top }\mathbf{Y}_{1,i-1}\left( \mathbf{Y}_{1,k-1}^{\top }\mathbf{Y}_{1,k-1}\right) ^{-1}\mathbf{Y}_{1,k-1}^{\top }{\varvec{\varepsilon }}_{2,k}. \end{aligned}$$
(15)

Similarly, for the expression in the denominator of \(\mathcal {V}_n\),

$$\begin{aligned}&\sum _{j=i}^{n-1}Y_{j}(Y_{j+1}-\widehat{\beta }_{2k}Y_j)\nonumber \\&=\mathbf{Y}_{i,n-1}^{\top }{\varvec{\varepsilon }}_{i+1,n}-\mathbf{Y}_{i,n-1}^{\top }\mathbf{Y}_{i,n-1}\left( \mathbf{Y}_{k+1,n-1}^{\top }\mathbf{Y}_{k+1,n-1}\right) ^{-1}\mathbf{Y}_{k+1,n-1}^{\top }{\varvec{\varepsilon }}_{k+2,n}. \end{aligned}$$
(16)

Lemma 4.2 in [1] gives

$$\begin{aligned} \sup _{\gamma \le t<1}\frac{1}{[nt]}\left| \sum _{s=1}^{[nt]}(Y_s^2-{\mathsf {E}}Y_s^2)\right| =o_{\mathsf {P}}(1) \end{aligned}$$
(17)

and

$$\begin{aligned} \sup _{0<t\le 1-\gamma }\frac{1}{[n(1-t)]}\left| \sum _{s=[nt]+1}^{n-1}(Y_s^2-{\mathsf {E}}Y_s^2)\right| =o_{\mathsf {P}}(1), \end{aligned}$$
(18)

as \(n\rightarrow \infty \), where \([nt]\) and \([n(1-t)]\) denote integer parts (truncation to zero decimal places). Finally, (14) together with (15), (16), (17), and (18) implies

$$\begin{aligned} \frac{1}{\sqrt{n-1}}&\left( \begin{array}{c} \underset{0\le u\le t}{\sup }\left| \sum \limits _{j=1}^{[nu]-1} Y_{j}(Y_{j+1}-\widehat{\beta }_{1[nt]}Y_j)\right| \\ \underset{t\le u\le 1}{\sup }\left| \sum \limits _{j=[nu]+1}^{n-1} Y_{j}(Y_{j+1}-\widehat{\beta }_{2[nt]}Y_j)\right| \end{array}\right) \\&\quad \xrightarrow [n\rightarrow \infty ]{\mathscr {D}^2[\gamma ,1-\gamma ]}\frac{\sigma ^2}{\sqrt{1-\beta ^2}}\left( \begin{array}{c} \sup \limits _{0\le u\le t}\left| \mathcal {W}(u)-u/t\mathcal {W}(t)\right| \\ \sup \limits _{t\le u\le 1}\left| \widetilde{\mathcal {W}}(u)-(1-u)/(1-t)\widetilde{\mathcal {W}}(t)\right| \end{array}\right) . \end{aligned}$$

Then, the assertion of the theorem directly follows.   \(\square \)

Proof

(of Theorem 2) Let us take \(k=\tau +2\), i.e., \(k=[\xi n]\) for some \(\zeta<\xi <1-\gamma \), and \(i=\tau +1\). Then,

$$\begin{aligned}&\sum _{j=1}^{\tau }Y_{j}(Y_{j+1}-\widehat{\beta }_{1(\tau +2)}Y_j)\\&=\mathbf{Y}_{1,\tau }^{\top }{\varvec{\varepsilon }}_{2,\tau +1}-\mathbf{Y}_{1,\tau }^{\top }\mathbf{Y}_{1,\tau }\left( \mathbf{Y}_{1,\tau +1}^{\top }\mathbf{Y}_{1,\tau +1}\right) ^{-1}\mathbf{Y}_{1,\tau +1}^{\top }{\varvec{\varepsilon }}_{2,\tau +2}-\mathbf{Y}_{1,\tau }^{\top }\mathbf{Y}_{1,\tau }\delta . \end{aligned}$$

According to the proof of Theorem 1, as \(n\rightarrow \infty \),

$$ \frac{1}{\sqrt{n-1}}\left( \mathbf{Y}_{1,\tau }^{\top }{\varvec{\varepsilon }}_{2,\tau +1}-\mathbf{Y}_{1,\tau }^{\top }\mathbf{Y}_{1,\tau }\left( \mathbf{Y}_{1,\tau +1}^{\top }\mathbf{Y}_{1,\tau +1}\right) ^{-1}\mathbf{Y}_{1,\tau +1}^{\top }{\varvec{\varepsilon }}_{2,\tau +2}\right) =\mathcal {O}_{\mathsf {P}}(1). $$

Lemma 4.2 from [1] gives

$$ \frac{1}{\sqrt{n-1}}\left| \mathbf{Y}_{1,\tau }^{\top }{} \mathbf{Y}_{1,\tau }\delta \right| \xrightarrow [n\rightarrow \infty ]{\mathsf {P}}\infty . $$
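Heuristically, this divergence is driven by the law of large numbers for \(\{Y_j^2\}\): assuming the change point is a positive fraction of the sample, \(\tau \approx \zeta n\) (consistent with the constraint \(\zeta <\xi \) above),

$$ \frac{1}{\sqrt{n-1}}\left| \mathbf{Y}_{1,\tau }^{\top }\mathbf{Y}_{1,\tau }\delta \right| =\frac{|\delta |}{\sqrt{n-1}}\sum _{j=1}^{\tau }Y_j^2\approx |\delta |\,\zeta \sqrt{n}\,{\mathsf {E}}Y_1^2\xrightarrow [n\rightarrow \infty ]{}\infty . $$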

Combining the previous two displays,

$$ \frac{1}{\sqrt{n-1}}\underset{2\le i \le k}{\max }\left| \sum _{j=1}^{i-1} Y_{j}(Y_{j+1}-\widehat{\beta }_{1k}Y_j)\right| \xrightarrow [n\rightarrow \infty ]{\mathsf {P}}\infty . $$

For \(\tau <k=[\xi n]\), the denominator in (4) divided by \(\sqrt{n-1}\) has the same distribution as under the null hypothesis and is, therefore, bounded in probability. It follows that the maximum of the ratio tends in probability to infinity as well, as \(n\rightarrow \infty \).   \(\square \)


Copyright information

© 2018 Springer Nature Switzerland AG


Cite this paper

Peštová, B., Pešta, M. (2018). Asymptotic and Bootstrap Tests for a Change in Autoregression Omitting Variability Estimation. In: Rojas, I., Pomares, H., Valenzuela, O. (eds) Time Series Analysis and Forecasting. ITISE 2017. Contributions to Statistics. Springer, Cham. https://doi.org/10.1007/978-3-319-96944-2_13
