Performance comparison of two segmentation algorithms using growing reference windows

  • U. Appel
  • A. v. Brandt
Session 4: Detection of Changes in Systems
Part of the Lecture Notes in Control and Information Sciences book series (LNCIS, volume 62)


Two procedures designed for the detection of parameter jumps in autoregressive, Gaussian-distributed processes, the generalized likelihood ratio (GLR) algorithm and the cumulative sum (CUSUM) algorithm, are compared with respect to their performance. Both algorithms share a growing reference window and a sliding, fixed-length test window, but use different detection statistics. Some rough features of the algorithms are deduced by using mean values in place of the stochastic signal itself. More detailed results are then obtained from extensive simulations with different types of parameter jumps in the test signals. As a general result, it is shown that the CUSUM procedure may perform slightly better with respect to the detection of spurious jumps, provided the direction and magnitude of the jump are known in advance. The GLR algorithm, on the other hand, yields much better results in the detection and, in particular, the positioning of jumps that follow each other within a short time interval ("short segments"). Moreover, the GLR algorithm is more robust when a segmentation procedure is applied under realistic assumptions.
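The window layout described in the abstract can be sketched in a few lines. The following is an illustrative simplification, not the authors' procedure: it detects a variance jump in white Gaussian data by comparing sample variances of the windows instead of fitting AR models, and the window lengths, threshold, and boundary-placement rule are assumptions chosen for the example.

```python
import numpy as np

def glr_segment(x, test_len=50, min_ref=50, threshold=20.0):
    """Jump detection with a growing reference window and a sliding,
    fixed-length test window, using a Gaussian GLR statistic.

    Illustrative sketch only: compares sample variances of white
    Gaussian data rather than AR model fits."""
    eps = 1e-12                      # guards log(0) on constant windows
    boundaries = []
    start = 0                        # beginning of the current segment
    t = start + min_ref + test_len
    while t <= len(x):
        ref = x[start:t - test_len]  # growing reference window
        test = x[t - test_len:t]     # sliding fixed-length test window
        both = x[start:t]            # pooled window (no-jump hypothesis)
        # Log-likelihood ratio for "one Gaussian segment" vs "two segments"
        d = 0.5 * (len(both) * np.log(np.var(both) + eps)
                   - len(ref) * np.log(np.var(ref) + eps)
                   - len(test) * np.log(np.var(test) + eps))
        if d > threshold:
            boundary = t - test_len  # crude placement at test-window start
            boundaries.append(boundary)
            start = boundary         # restart the reference window
            t = start + min_ref + test_len
        else:
            t += 1
    return boundaries
```

Raising the threshold lowers the false alarm rate at the cost of detection delay; note also that this crude boundary placement degrades for short segments, which is exactly where the abstract reports the GLR statistic's positioning advantage.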







Copyright information

© Springer-Verlag 1984

Authors and Affiliations

  • U. Appel (1)
  • A. v. Brandt (1)
  1. Bundeswehr University, FB-ET, Neubiberg
