An Extension of the Gauss-Markov Theorem for Mixed Linear Regression Models with Non-Stationary Stochastic Parameters

  • Estela Bee Dagum
  • Pierre A. Cholette

Abstract

The presence of fixed and stochastic parameters in a mixed linear regression model has been dealt with for the case where the stochastic parameters follow a stationary process. The solution is given either by Generalized Least Squares (e.g. Rao, 1965, p. 192) or by a recursive state space estimation procedure such as the Kalman filter and smoother (e.g. Sallas and Harville, 1981). The estimation of non-stationary stochastic parameters has mainly been approached in the state space framework, either as an initial-condition problem (see, among others, Ansley and Kohn, 1985, 1989; Kohn and Ansley, 1986; Bell and Hillmer, 1991; De Jong, 1989, 1991) or as a hierarchical model in which the “fixed” effects in the hierarchy are given a flat prior distribution (see Sallas and Harville, 1981, 1988; Tsimikas and Ledolter, 1994).
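
For the stationary case referred to above, the GLS solution for the fixed parameters and the corresponding best linear unbiased prediction (BLUP) of the stochastic parameters can be sketched in a few lines. The Python code below is an illustrative sketch, not the authors' implementation: it assumes the mixed model y = Xb + Zu + e with known, positive-definite covariances G (of u) and R (of e), and the function name mixed_model_gls and the simulated data are assumptions introduced here for illustration (cf. Rao, 1965; Sallas and Harville, 1981; Robinson, 1991).

    import numpy as np

    def mixed_model_gls(y, X, Z, G, R):
        """Return (beta_hat, u_hat): GLS fixed effects and BLUP of the stochastic effects
        in y = X beta + Z u + e, with u ~ (0, G) and e ~ (0, R), G and R known."""
        V = Z @ G @ Z.T + R                      # marginal covariance of y
        V_inv = np.linalg.inv(V)
        # GLS estimator of the fixed parameters
        beta_hat = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ y)
        # BLUP of the stochastic parameters (Robinson, 1991)
        u_hat = G @ Z.T @ V_inv @ (y - X @ beta_hat)
        return beta_hat, u_hat

    # Illustrative use with simulated data (dimensions are arbitrary assumptions).
    rng = np.random.default_rng(0)
    n, p, q = 40, 2, 5
    X = np.column_stack([np.ones(n), np.arange(n)])   # fixed-effect regressors
    Z = rng.standard_normal((n, q))                   # random-effect regressors
    G = 0.5 * np.eye(q)                               # stationary random-effect covariance
    R = np.eye(n)                                     # white-noise observation errors
    u = rng.multivariate_normal(np.zeros(q), G)
    y = X @ np.array([1.0, 0.2]) + Z @ u + rng.standard_normal(n)
    beta_hat, u_hat = mixed_model_gls(y, X, Z, G, R)
    print(beta_hat)

The same estimates can equivalently be obtained recursively by a Kalman filter and smoother when the model is cast in state space form; the non-stationary case treated in the paper is what requires the extensions cited in the abstract.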

References

  1. Ansley, C.F. and Kohn, R. (1985), “Estimation, Filtering, and Smoothing in State Space Models with Incompletely Specified Initial Conditions”, The Annals of Statistics, Vol. 13, No. 4, pp. 1286–1316.
  2. Ansley, C.F. and Kohn, R. (1989), “Filtering and Smoothing in State Space Models with Partially Diffuse Initial Conditions”, Journal of Time Series Analysis, Vol. 11, pp. 275–293.
  3. Bell, W.R. and Hillmer, S.C. (1991), “Initializing the Kalman Filter for Nonstationary Time Series Models”, Journal of Time Series Analysis, Vol. 12, pp. 283–300.
  4. Boot, J.C.G., Feibes, W. and Lisman, J.H.C. (1967), “Further Methods of Derivation of Quarterly Figures from Annual Data”, Applied Statistics, Vol. 16, No. 1, pp. 65–75.
  5. Box, G.E.P. and Jenkins, G.M. (1970), Time Series Analysis, Forecasting and Control, Holden-Day.
  6. Chen, Z.-G., Cholette, P.A. and Dagum, E.B. (1997), “A Nonparametric Method for Benchmarking Survey Data via Signal Extraction”, Journal of the American Statistical Association, Vol. 92, No. 440, pp. 1563–1571.
  7. Cholette, P.A. and Dagum, E.B. (1994), “Benchmarking Time Series with Autocorrelated Sampling Errors”, International Statistical Review, Vol. 62, pp. 365–377.
  8. Chow, G.C. and Lin, A.-L. (1971), “Best Linear Unbiased Interpolation, Distribution and Extrapolation of Time Series by Related Series”, Review of Economics and Statistics, Vol. 53, No. 4, pp. 372–375.
  9. Cohen, K.J., Müller, W. and Padberg, M.W. (1971), “Autoregressive Approaches to the Disaggregation of Time Series Data”, Applied Statistics, Vol. 20, pp. 119–129.
  10. Dagum, E.B., Cholette, P.A. and Chen, Z.-G. (1998), “A Unified View of Signal Extraction, Benchmarking and Interpolation of Time Series”, forthcoming in International Statistical Review.
  11. De Jong, P. (1991), “The Diffuse Kalman Filter”, The Annals of Statistics, Vol. 19, pp. 1073–1083.
  12. Duncan, D.B. and Horn, S.D. (1972), “Linear Dynamic Recursive Estimation from the Viewpoint of Regression Analysis”, Journal of the American Statistical Association, Vol. 67, pp. 815–821.
  13. Guerrero, V.M. (1989), “Optimal Conditional ARIMA Forecast”, Journal of Forecasting, Vol. 8, pp. 215–229.
  14. Kohn, R. and Ansley, C.F. (1986), “Estimation, Prediction and Interpolation for ARIMA Models with Missing Data”, Journal of the American Statistical Association, Vol. 79, pp. 125–131.
  15. Pankratz, A. (1989), “Time Series Forecasts and Extra-Model Information”, Journal of Forecasting, Vol. 8, pp. 75–83.
  16. Rao, C.R. (1965), Linear Statistical Inference and Its Applications, John Wiley.
  17. Robinson, G.K. (1991), “That BLUP Is a Good Thing: The Estimation of Random Effects”, Statistical Science, Vol. 6, No. 1, pp. 15–51.
  18. Sallas, W.M. and Harville, D.A. (1981), “Best Linear Recursive Estimation for Mixed Linear Models”, Journal of the American Statistical Association, Vol. 76, pp. 860–869.
  19. Sallas, W.M. and Harville, D.A. (1988), “Noninformative Priors and Restricted Maximum Likelihood Estimation in the Kalman Filter”, in Bayesian Analysis of Time Series and Dynamic Models (J.C. Spall, ed.), New York: Marcel Dekker.
  20. Stram, D.O. and Wei, W.W.S. (1986), “A Methodological Note on the Disaggregation of Time Series Totals”, Journal of Time Series Analysis, Vol. 7, pp. 293–302.
  21. Trabelsi, A. and Hillmer, S.C. (1989), “A Benchmarking Approach to Forecast Combination”, Journal of Business and Economic Statistics, Vol. 7, pp. 353–362.
  22. Tsimikas, J. and Ledolter, J. (1994), “REML and Best Linear Unbiased Prediction in State Space Models”, Communications in Statistics, Theory and Methods, Vol. 23, No. 8, pp. 2253–2268.
  23. Whittle, P. (1963), Prediction and Regulation, New York: D. Van Nostrand.

Copyright information

© Physica-Verlag Heidelberg 1999

Authors and Affiliations

  • Estela Bee Dagum (1)
  • Pierre A. Cholette (2)
  1. Statistical Sciences, University of Bologna, Bologna, Italy
  2. Time Series Research and Analysis Centre, Statistics Canada, Ottawa, Canada