
Time Series Prediction via Aggregation: An Oracle Bound Including Numerical Cost

Conference paper

Part of the book series: Lecture Notes in Statistics (LNSP, volume 217)

Abstract

We address the problem of forecasting a time series satisfying the Causal Bernoulli Shift model, using a parametric set of predictors. The aggregation technique provides a predictor with well-established and satisfactory theoretical properties, expressed by an oracle inequality for the prediction risk. The numerical computation of the aggregated predictor usually relies on a Markov chain Monte Carlo (MCMC) method whose convergence must be assessed. In particular, it is crucial to bound the number of simulations needed to achieve a numerical precision of the same order as the prediction risk. In this direction we present a fairly general result, which can be seen as an oracle inequality that includes the numerical cost of computing the predictor. The numerical cost appears by letting the oracle inequality depend on the number of simulations required in the Monte Carlo approximation. Numerical experiments are then carried out to support our findings.
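To make the computational pipeline concrete, the following is a minimal, hypothetical sketch (in Python) of an exponentially weighted aggregate of parametric predictors approximated by a random-walk Metropolis–Hastings sampler. The AR(1) predictor family, the uniform prior, the temperature lam, the step size and all function names are illustrative assumptions, not the construction analysed in the paper; the point is only that the aggregated forecast is a Monte Carlo average whose error depends on the number of simulations, which is what an oracle bound including numerical cost must account for.

import numpy as np

def empirical_risk(theta, x):
    # One-step-ahead squared prediction error of a simple AR(1)-type
    # predictor x_t ≈ theta * x_{t-1} (illustrative predictor family).
    return np.mean((x[1:] - theta * x[:-1]) ** 2)

def log_target(theta, x, lam=10.0):
    # Exponentially weighted (Gibbs) density: log pi(theta) ∝ -lam * risk,
    # with a uniform prior on (-1, 1); assumed, not the paper's exact prior.
    if abs(theta) >= 1.0:
        return -np.inf
    return -lam * empirical_risk(theta, x)

def aggregated_prediction(x, n_sim=5000, step=0.1, rng=None):
    # Approximate the aggregated predictor of the next value by averaging
    # theta * x_T over n_sim random-walk Metropolis-Hastings draws.
    rng = np.random.default_rng() if rng is None else rng
    theta = 0.0
    preds = np.empty(n_sim)
    for i in range(n_sim):
        prop = theta + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_target(prop, x) - log_target(theta, x):
            theta = prop                  # accept the proposed parameter
        preds[i] = theta * x[-1]          # predictor attached to the current draw
    return preds.mean()                   # Monte Carlo approximation of the aggregate

# Usage: simulate an AR(1) path and forecast the next observation.
rng = np.random.default_rng(0)
x = np.zeros(200)
for t in range(1, 200):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
print(aggregated_prediction(x, n_sim=5000, rng=rng))

In this sketch the only tuning knob tied to the numerical cost is n_sim: the Monte Carlo error of the average shrinks as n_sim grows, and the type of result described in the abstract quantifies how large n_sim must be for that error to be of the same order as the prediction risk.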



Acknowledgements

The author is especially thankful to François Roueff, Christophe Giraud, Peter Weyer-Brown and the two referees for their extremely careful readings and highly pertinent remarks, which substantially improved the paper. This work has been partially supported by the Conseil régional d’Île-de-France under a doctoral allowance of its program Réseau de Recherche Doctoral en Mathématiques de l’Île de France (RDM-IdF) for the period 2012–2015 and by the Labex LMH (ANR-11-IDEX-003-02).

Author information

Correspondence to Andres Sanchez-Perez.


Copyright information

© 2015 Springer International Publishing Switzerland

Cite this paper

Sanchez-Perez, A. (2015). Time Series Prediction via Aggregation: An Oracle Bound Including Numerical Cost. In: Antoniadis, A., Poggi, J.-M., Brossat, X. (eds) Modeling and Stochastic Learning for Forecasting in High Dimensions. Lecture Notes in Statistics, vol 217. Springer, Cham. https://doi.org/10.1007/978-3-319-18732-7_13

