Hybrid Genetic Learning of Hidden Markov Models for Time Series Prediction

Chapter in Bio-Mimetic Approaches in Management Science

Abstract

This paper shows how a hybrid genetic/gradient search can be used to learn hidden Markov models for time series prediction. The learning algorithm, called GHOSP, uses a gradient-based search, namely the Baum-Welch algorithm, as a local search operator inside the main loop of a genetic algorithm, in conjunction with standard genetic operators adapted to hidden Markov models. GHOSP efficiently learns both the coefficients and the architecture of a hidden Markov model so as to maximize the probability of generating an observation sequence O, where O encodes the recent past of a time series. Once an accurate stochastic model of the series has been learned, it can be used to predict the next values of the series. We apply this framework to several standard series, including economic ones.
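The chapter itself gives no code, but the hybrid scheme described above can be sketched roughly as follows. The Python sketch below is our own illustration, not the authors' GHOSP implementation: it evolves a population of discrete HMMs with a genetic algorithm, applies one Baum-Welch re-estimation pass to each offspring as the local search operator, and then uses the best model's forward probabilities to guess the next symbol of a discretized series. All names (ghosp_like_search, predict_next_symbol), the genetic operators, the population size and the toy observation sequence are illustrative assumptions rather than details taken from the chapter; in particular, the number of states is kept fixed here, whereas GHOSP also adapts the architecture.

# Illustrative sketch only: a GA over discrete HMMs with one Baum-Welch pass
# as local search, in the spirit of the hybrid scheme described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def random_hmm(n_states, n_symbols):
    # Random row-stochastic initial, transition and emission parameters.
    pi = rng.dirichlet(np.ones(n_states))
    A = rng.dirichlet(np.ones(n_states), size=n_states)
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)
    return pi, A, B

def forward(pi, A, B, obs):
    # Scaled forward pass: returns log P(O | model) and the final state posterior.
    alpha = pi * B[:, obs[0]]
    logp = 0.0
    for t in range(len(obs)):
        if t > 0:
            alpha = (alpha @ A) * B[:, obs[t]]
        s = alpha.sum()
        logp += np.log(s)
        alpha = alpha / s
    return logp, alpha

def baum_welch_step(pi, A, B, obs):
    # One Baum-Welch re-estimation pass (the local search operator of the hybrid loop).
    obs = np.asarray(obs)
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                      # unscaled passes: fine for short windows
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((N, N))
    for t in range(T - 1):
        num = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1]
        xi += num / num.sum()
    new_pi = gamma[0]
    new_A = xi / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[obs == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    def smooth(M):                             # avoid exact zeros in the re-estimated model
        M = M + 1e-6
        return M / M.sum(axis=-1, keepdims=True)
    return smooth(new_pi), smooth(new_A), smooth(new_B)

def crossover(parent1, parent2):
    # Exchange whole rows of A and B (state by state) between two parent models.
    (pi1, A1, B1), (pi2, A2, B2) = parent1, parent2
    mask = rng.random(len(pi1)) < 0.5
    pi = np.where(mask, pi1, pi2)
    return pi / pi.sum(), np.where(mask[:, None], A1, A2), np.where(mask[:, None], B1, B2)

def mutate(pi, A, B, rate=0.1):
    # Add a small positive perturbation to each row and re-normalize it.
    def jiggle(M):
        M = M + rate * rng.random(M.shape)
        return M / M.sum(axis=-1, keepdims=True)
    return jiggle(pi), jiggle(A), jiggle(B)

def ghosp_like_search(obs, n_states=3, n_symbols=4, pop_size=20, generations=30):
    # GA main loop; fitness is log P(O | model), Baum-Welch refines each child.
    population = [random_hmm(n_states, n_symbols) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda m: forward(*m, obs)[0], reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            i, j = rng.choice(len(parents), size=2, replace=False)
            child = mutate(*crossover(parents[i], parents[j]))
            children.append(baum_welch_step(*child, obs))   # local refinement
        population = parents + children
    return max(population, key=lambda m: forward(*m, obs)[0])

def predict_next_symbol(model, obs):
    # Next-symbol distribution from the filtered state posterior: alpha_T A B.
    pi, A, B = model
    _, alpha = forward(pi, A, B, obs)
    return int(np.argmax((alpha @ A) @ B))

if __name__ == "__main__":
    obs = np.array([0, 1, 2, 3] * 8)           # toy stand-in for a discretized series window
    model = ghosp_like_search(obs)
    print("log P(O | model):", round(forward(*model, obs)[0], 2))
    print("predicted next symbol:", predict_next_symbol(model, obs))

In this sketch the observation sequence plays the role of the encoded recent past of the series: the GA plus Baum-Welch search maximizes its likelihood, and the fitted model's one-step-ahead emission probabilities supply the prediction.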


References

  • Agazzi O. and Kuo S.S., Keyword spotting in poorly printed documents using pseudo-2D HMMs, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 8, pp. 842–848, 1994.

  • Asselin de Beauville J.P., Slimane M., Venturini G., Laporte J.L. and Narbey M., Two hybrid gradient and genetic search algorithms for learning hidden Markov models, Workshop on Evolutionary Computing and Machine Learning, ICML'96, Bari, July 3–6, pp. 5–12, 1996.

  • Baum L.E. and Eagon J.A., An inequality with applications to statistical estimation for probabilistic functions of Markov processes and to a model for ecology, Bull. Amer. Math. Soc., vol. 73, pp. 360–363, 1967.

  • Baum L.E., An inequality and associated maximization technique in statistical estimation for probabilistic functions of Markov processes, Inequalities, vol. 3, pp. 1–8, 1972.

  • Box G.E.P. and Jenkins G.M., Time Series Analysis: Forecasting and Control, 2nd ed., Oakland, CA: Holden-Day.

  • Brouard T., Slimane M., Venturini G. and Asselin de Beauville J.P., Apprentissage du nombre d'états d'une chaîne de Markov cachée pour la reconnaissance d'images, submitted to GRETSI'97, Grenoble, September 1997.

  • Chatfield C., The Analysis of Time Series: An Introduction, 4th edition, Chapman and Hall, 1989.

  • De Garis H., Using the genetic algorithm to train time dependent behaviors in neural networks, Proceedings of the First International Workshop on Multistrategy Learning, R.S. Michalski and G. Tecuci (Eds), pp. 273–280, 1991.

  • De Jong K., Learning with genetic algorithms: an overview, Machine Learning, vol. 3, pp. 121–138, 1988.

  • Deng L., A generalized hidden Markov model with state-conditioned trend functions of time for speech signal, Signal Processing, vol. 27, pp. 65–78, 1992.

  • Fraser A.M. and Dimitriadis A., Forecasting probability densities by using hidden Markov models with mixed states, in (Weigend and Gershenfeld 1993), pp. 265–282.

  • Iba H., Sato T. and de Garis H., Temporal data processing with genetic programming, Proceedings of the Sixth International Conference on Genetic Algorithms, L.J. Eshelman (Ed.), Morgan Kaufmann, pp. 279–286, 1995.

  • Holland J.H., Adaptation in Natural and Artificial Systems, Ann Arbor: University of Michigan Press, 1975.

  • Holland J.H., Escaping brittleness: the possibilities of general-purpose learning algorithms applied to parallel rule-based systems, in Machine Learning: An Artificial Intelligence Approach, vol. 3, R.S. Michalski, T.M. Mitchell, J.G. Carbonell and Y. Kodratoff (Eds), Morgan Kaufmann, pp. 593–623, 1986.

  • Howard E. and Oakley N., The application of genetic programming to the investigation of short, noisy, chaotic data series, AISB Workshop 1994, Selected Papers, T.C. Fogarty (Ed.), Lecture Notes in Computer Science 865, Springer-Verlag, pp. 320–332.

  • Koza J.R., Hierarchical genetic algorithms operating on populations of computer programs, Proceedings of the 11th International Joint Conference on Artificial Intelligence, IJCAI 1989, Morgan Kaufmann, pp. 768–774.

  • Levinson S.E., Rabiner L.R. and Sondhi M.M., An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition, The Bell System Technical Journal, vol. 62, no. 4, 1983.

  • Mahfoud S.W., A comparison of parallel and sequential niching methods, Proceedings of the Sixth International Conference on Genetic Algorithms, L.J. Eshelman (Ed.), Morgan Kaufmann, pp. 136–143, 1995.

  • Mozer M.C., Neural net architectures for temporal sequence processing, in (Weigend and Gershenfeld 1993), pp. 243–264.

  • Rabiner L.R., A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE, vol. 77, pp. 257–286, 1989.

  • Robertson G.G. and Riolo R.L., A tale of two classifier systems, Machine Learning, vol. 3, pp. 139–159, 1988.

  • Slimane M. and Asselin de Beauville J.P., Introduction aux modèles de Markov cachés du premier ordre (1ère partie), Internal Report No. 171, LI EIII, Tours, 36 p., 1994.

  • Slimane M., Venturini G., Asselin de Beauville J.P., Brouard T. and Brandeau A., Optimizing hidden Markov models with a genetic algorithm, Artificial Evolution, Lecture Notes in Computer Science, vol. 1063, Springer-Verlag, pp. 384–396, 1996.

  • Torreele J., Temporal processing with recurrent networks: an evolutionary approach, Proceedings of the Fourth International Conference on Genetic Algorithms, R.K. Belew and L.B. Booker (Eds), Morgan Kaufmann, pp. 555–561, 1991.

  • Viterbi A.J., Error bounds for convolutional codes and an asymptotically optimum decoding algorithm, IEEE Transactions on Information Theory, vol. IT-13, pp. 260–269, 1967.

  • Weigend A.S. and Gershenfeld N.A. (Eds), Time Series Prediction: Forecasting the Future and Understanding the Past, SFI Studies in the Sciences of Complexity, Proc. Vol. XV, Addison-Wesley, 1993.


Copyright information

© 1998 Springer Science+Business Media Dordrecht

About this chapter

Cite this chapter

Slimane, M., Venturini, G., Asselin de Beauville, JP., Brouard, T. (1998). Hybrid Genetic Learning of Hidden Markov Models for Time Series Prediction. In: Aurifeille, JM., Deissenberg, C. (eds) Bio-Mimetic Approaches in Management Science. Advances in Computational Management Science, vol 1. Springer, Boston, MA. https://doi.org/10.1007/978-1-4757-2821-7_12


  • DOI: https://doi.org/10.1007/978-1-4757-2821-7_12

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4419-4791-8

  • Online ISBN: 978-1-4757-2821-7

  • eBook Packages: Springer Book Archive
