ARIMA Models

  • Robert H. Shumway
  • David S. Stoffer
Chapter
Part of the Springer Texts in Statistics book series (STS)

Abstract

Classical regression is often insufficient for explaining all of the interesting dynamics of a time series. For example, the ACF of the residuals of the simple linear regression fit to the price of chicken data (see Example 2.4) reveals additional structure in the data that regression did not capture. Instead, introducing correlation through lagged linear relations leads to the autoregressive (AR) and autoregressive moving average (ARMA) models presented in Whittle [209]. Adding nonstationary models to the mix leads to the autoregressive integrated moving average (ARIMA) model popularized in the landmark work by Box and Jenkins [30]. The Box–Jenkins method for identifying ARIMA models is given in this chapter, along with techniques for parameter estimation and forecasting for these models. A partial theoretical justification of the use of ARMA models is discussed in Sect. B.4.
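
The workflow the abstract describes, detect autocorrelation in regression residuals, then identify, estimate, and forecast an ARIMA model, can be illustrated with a short sketch. What follows is a minimal, hypothetical Python example using statsmodels (the book itself works in R with the astsa package); the simulated trend-plus-AR(1) series merely stands in for the chicken price data of Example 2.4.

    import numpy as np
    from statsmodels.tsa.arima_process import ArmaProcess
    from statsmodels.tsa.stattools import acf
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    n = 200
    t = np.arange(n, dtype=float)

    # Linear trend plus AR(1) noise: a stand-in for the chicken price series.
    noise = ArmaProcess(ar=[1, -0.8]).generate_sample(
        nsample=n, distrvs=rng.standard_normal)
    y = 0.05 * t + noise

    # Identification: detrend by ordinary least squares; the residual ACF
    # decays slowly, signalling correlation the regression did not capture.
    slope, intercept = np.polyfit(t, y, deg=1)
    resid = y - (slope * t + intercept)
    print("residual ACF, lags 1-5:", np.round(acf(resid, nlags=5)[1:], 2))

    # Estimation: refit with ARMA errors, here an ARIMA(1,0,0) with the
    # trend as a regressor, so trend and error dynamics are fit jointly.
    fit = ARIMA(y, exog=t, order=(1, 0, 0)).fit()
    print(fit.summary().tables[1])

    # Forecasting: 12 steps ahead, extending the trend regressor to match.
    future_t = t[-1] + 1.0 + np.arange(12)
    print(fit.forecast(steps=12, exog=future_t))

The joint fit in the last step is the point of the chapter: rather than patching the regression after the fact, the autocorrelated errors are modeled explicitly and the forecasts account for them.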

References

  1. [29] Box GEP, Pierce DA (1970) Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. J Am Stat Assoc 65:1509–1526
  2. [30] Box GEP, Jenkins GM (1970) Time series analysis, forecasting, and control. Holden-Day, Oakland
  3. [31] Box GEP, Jenkins GM, Reinsel GC (1994) Time series analysis, forecasting, and control, 3rd edn. Prentice Hall, Englewood Cliffs
  4. [36] Brockwell PJ, Davis RA (1991) Time series: theory and methods, 2nd edn. Springer, New York
  5. [43] Cochrane D, Orcutt GH (1949) Applications of least squares regression to relationships containing autocorrelated errors. J Am Stat Assoc 44:32–61
  6. [49] Davies N, Triggs CM, Newbold P (1977) Significance levels of the Box-Pierce portmanteau statistic in finite samples. Biometrika 64:517–522
  7. [54] Durbin J (1960) Estimation of parameters in time series regression models. J R Stat Soc B 22:139–153
  8. [56] Efron B, Tibshirani R (1994) An introduction to the bootstrap. Chapman and Hall, New York
  9. [66] Fuller WA (1996) Introduction to statistical time series, 2nd edn. Wiley, New York
  10. [86] Hannan EJ (1970) Multiple time series. Wiley, New York
  11. [106] Johnson RA, Wichern DW (1992) Applied multivariate statistical analysis, 3rd edn. Prentice-Hall, Englewood Cliffs
  12. [127] Levinson N (1947) The Wiener (root mean square) error criterion in filter design and prediction. J Math Phys 25:262–278
  13. [129] Ljung GM, Box GEP (1978) On a measure of lack of fit in time series models. Biometrika 65:297–303
  14. [137] McLeod AI (1978) On the distribution of residual autocorrelations in Box-Jenkins models. J R Stat Soc B 40:296–302
  15. [142] Mickens RE (1990) Difference equations: theory and applications, 2nd edn. Springer, New York
  16. [156] Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1993) Numerical recipes in C: the art of scientific computing, 2nd edn. Cambridge University Press, Cambridge
  17. [209] Whittle P (1951) Hypothesis testing in time series analysis. Almqvist & Wiksells, Uppsala

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Robert H. Shumway (1)
  • David S. Stoffer (2)

  1. Department of Statistics, University of California, Davis, Davis, USA
  2. Department of Statistics, University of Pittsburgh, Pittsburgh, USA