Introduction to the DLM: The First-Order Polynomial Model

  • Mike West
  • Jeff Harrison
Part of the Springer Series in Statistics book series (SSS)

Abstract

Many important underlying concepts and analytic features of dynamic linear models are apparent in the simplest and most widely used case, the first-order polynomial model. By way of introduction to DLMs, this case is described and examined in detail in this chapter. The first-order polynomial model is the simple, yet non-trivial, time series model in which the observation series Y_t is represented as Y_t = μ_t + ν_t, where μ_t is the current level of the series at time t and ν_t ∼ N[0, V_t] is the observational error or noise term. The time evolution of the level of the series is a simple random walk, μ_t = μ_{t−1} + ω_t, with evolution error ω_t ∼ N[0, W_t]. This latter equation describes what is often referred to as a locally constant mean model. Note the assumption that the two error terms, the observational and evolution errors, are normally distributed for each t. In addition we adopt the assumptions that the error sequences are independent over time and mutually independent. Thus, for all t and all s with t ≠ s, ν_t and ν_s are independent, ω_t and ω_s are independent, and ν_t and ω_s are independent. A further assumption at this stage is that the variances V_t and W_t are known for each time t.

Figure 2.1 shows two examples of such Y_t series together with their underlying μ_t processes. In each, the starting value is μ_0 = 25 and the variances defining the model are constant in time, V_t = V and W_t = W, with V = 1 in both cases and evolution variances (a) W = 0.05 and (b) W = 0.5. Thus in (a) the movement in the level over time is small compared with the observational variance, W = V/20, leading to a typical locally constant realisation, whereas in (b) the larger value of W leads to greater variation over time in the level of the series.
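To make the data-generating mechanism concrete, the following is a minimal simulation sketch in Python (assuming NumPy is available). The parameter values match those quoted for Figure 2.1; the function name, series length, and random seed are illustrative choices, not taken from the chapter.

    # Sketch: simulate the first-order polynomial DLM
    #   Y_t = mu_t + nu_t,      nu_t ~ N[0, V]
    #   mu_t = mu_{t-1} + omega_t,  omega_t ~ N[0, W]
    # Settings mirror Figure 2.1: mu_0 = 25, V = 1, W = 0.05 or 0.5.
    import numpy as np

    def simulate_first_order_dlm(n, mu0=25.0, V=1.0, W=0.05, seed=0):
        rng = np.random.default_rng(seed)
        mu = np.empty(n)
        y = np.empty(n)
        level = mu0
        for t in range(n):
            level = level + rng.normal(0.0, np.sqrt(W))  # random-walk evolution of the level
            mu[t] = level
            y[t] = level + rng.normal(0.0, np.sqrt(V))   # observation: level plus noise
        return y, mu

    # Two realisations paralleling panels (a) and (b) of Figure 2.1:
    y_a, mu_a = simulate_first_order_dlm(100, W=0.05)  # slowly drifting level, W = V/20
    y_b, mu_b = simulate_first_order_dlm(100, W=0.5)   # more mobile level

With the smaller evolution variance the simulated level barely moves relative to the observation noise, giving the locally constant appearance described above; with W = 0.5 the level wanders visibly over the same span.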

Keywords

Discount Factor · Time Series Model · Constant Model · Observational Variance · Exponentially Weighted Moving Average

Copyright information

© Springer Science+Business Media New York 1989

Authors and Affiliations

  • Mike West (1)
  • Jeff Harrison (2)
  1. Institute of Statistics and Decision Sciences, Duke University, Durham, USA
  2. Department of Statistics, University of Warwick, Coventry, UK
