
Hidden Recursive Models

  • Conference paper
In: Neural Nets WIRN VIETRI-97

Part of the book series: Perspectives in Neural Computing


Abstract

Hidden Markov models (HMMs) and input/output HMMs are probabilistic graphical models for sequence learning. In data-structure terms, a sequence is a linear chain. Depending on the task, however, the entities that need to be adaptively processed may be organized into data structures more complex than simple linear chains. In this paper we propose a theoretical framework that extends (input/output) HMMs to process information structured according to any directed ordered acyclic graph. The resulting hidden recursive model (HRM) can be applied to the classification or transduction of data structures. We report experimental results on tree automata induction tasks and on a simple logical-term classification problem.
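The generalization the abstract describes — replacing the chain-structured forward recursion of an HMM with a bottom-up recursion over a tree — can be illustrated with a minimal sketch. The parameter names, the restriction to binary trees, and the particular factorization below are illustrative assumptions for exposition, not the paper's actual model: each node's hidden state is drawn conditioned on the states of its ordered children, and each node emits its label given its state.

```python
import numpy as np

# Hypothetical setup: a probabilistic bottom-up model over labeled binary trees.
N_STATES = 2
SYMBOLS = ["a", "b"]
SYM_IDX = {s: i for i, s in enumerate(SYMBOLS)}
rng = np.random.default_rng(0)

def normalize(a, axis=-1):
    return a / a.sum(axis=axis, keepdims=True)

prior = normalize(rng.random(N_STATES))                       # P(state) at a leaf
trans = normalize(rng.random((N_STATES,) * 3))                # P(parent k | left i, right j)
emit = normalize(rng.random((N_STATES, len(SYMBOLS))))        # P(symbol | state)

# A tree is (label, left, right) for internal nodes, or (label,) for leaves.
def inside(tree):
    """beta[k] = P(observed labels in subtree, subtree-root state = k).

    On a right-branching chain this reduces to the usual HMM forward recursion;
    on a tree, the recursion merges the 'inside' vectors of the ordered children.
    """
    label = tree[0]
    e = emit[:, SYM_IDX[label]]
    if len(tree) == 1:  # leaf: state drawn from the prior
        return prior * e
    beta_l = inside(tree[1])
    beta_r = inside(tree[2])
    # Sum over child states i, j: trans[i, j, k] * beta_l[i] * beta_r[j]
    return e * np.einsum("ijk,i,j->k", trans, beta_l, beta_r)

tree = ("a", ("b",), ("a", ("a",), ("b",)))
likelihood = inside(tree).sum()   # P(labels | tree shape), marginalized over states
```

Training such a model would require an EM-style update analogous to Baum-Welch, with the inside recursion above playing the role of the forward pass; that machinery is omitted here.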





Copyright information

© 1998 Springer-Verlag London Limited

About this paper

Cite this paper

Frasconi, P., Gori, M., Sperduti, A. (1998). Hidden Recursive Models. In: Marinaro, M., Tagliaferri, R. (eds) Neural Nets WIRN VIETRI-97. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-1520-5_32


  • DOI: https://doi.org/10.1007/978-1-4471-1520-5_32

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-1522-9

  • Online ISBN: 978-1-4471-1520-5

  • eBook Packages: Springer Book Archive
