Finite-State Computation in Analog Neural Networks: Steps towards Biologically Plausible Models?

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2036)

Abstract

Finite-state machines are the most pervasive models of computation, not only in theoretical computer science but also in all of its applications to real-life problems, and they constitute the best-characterized computational model. On the other hand, neural networks, proposed almost sixty years ago by McCulloch and Pitts as a simplified model of nervous activity in living beings, have evolved into a great variety of so-called artificial neural networks. Artificial neural networks have become a very successful tool for modelling and problem solving because of their built-in learning capability, but most of the progress in this field has occurred with models that are far removed from the behaviour of real, i.e., biological, neural networks. This paper surveys the work that has established a connection between finite-state machines and (mainly discrete-time recurrent) neural networks, and suggests possible ways to construct finite-state models in biologically plausible neural networks.
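Among the results the survey covers is the fact that a finite-state machine can be stably encoded in a discrete-time recurrent network of sigmoid units. The following Python sketch is only a rough illustration of that idea, not the construction described in the chapter: it simulates a small deterministic automaton with second-order (state x symbol) weights, a one-hot state vector, and a saturating gain H (all names and the value of H are choices made for this example).

    import numpy as np

    # Example DFA over {0, 1}: accepts strings with an even number of 1s.
    n_states, n_symbols = 2, 2
    delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # transition function
    accepting = {0}

    H = 10.0                                               # sigmoid gain (chosen large enough to saturate)
    W = np.full((n_symbols, n_states, n_states), -H)       # second-order (symbol, next state, state) weights
    for (q, a), q_next in delta.items():
        W[a, q_next, q] = +H                               # excite only the successor state

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def run(symbols):
        x = np.zeros(n_states)
        x[0] = 1.0                                         # one-hot encoding of the start state
        for a in symbols:
            x = sigmoid(W[a] @ x - H / 2.0)                # one recurrent update per input symbol
        return x.argmax() in accepting

    print(run([1, 0, 1]))   # True  (two 1s)
    print(run([1, 1, 1]))   # False (three 1s)

With a sufficiently large gain the sigmoid outputs stay close to 0 or 1, so the network state remains near a one-hot corner after every update and the automaton's transitions are reproduced for arbitrarily long input strings.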


References

  1. W.S. McCulloch and W.H. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115–133, 1943.

  2. J.E. Hopcroft and J.D. Ullman. Introduction to automata theory, languages, and computation. Addison-Wesley, Reading, MA, 1979.

3. S.C. Kleene. Representation of events in nerve nets and finite automata. In C.E. Shannon and J. McCarthy, editors, Automata Studies, pages 3–42. Princeton University Press, Princeton, N.J., 1956.

  4. M.L. Minsky. Computation: Finite and Infinite Machines. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1967. Ch: Neural Networks. Automata Made up of Parts.

5. N. Alon, A.K. Dewdney, and T.J. Ott. Efficient simulation of finite automata by neural nets. Journal of the Association for Computing Machinery, 38(2):495–514, 1991.

  6. P. Indyk. Optimal simulation of automata by neural nets. In Proceedings of the 12th Annual Symposium on Theoretical Aspects of Computer Science, pages 337–348, Berlin, 1995. Springer-Verlag.

  7. B.G. Horne and D.R. Hush. Bounds on the complexity of recurrent neural network implementations of finite state machines. Neural Networks, 9(2):243–252, 1996.

  8. Tony Robinson and Frank Fallside. A recurrent error propagation network speech recognition system. Computer Speech and Language, 5:259–274, 1991.

  9. J.L. Elman. Finding structure in time. Cognitive Science, 14:179–211, 1990.

  10. A. Cleeremans, D. Servan-Schreiber, and J.L. McClelland. Finite state automata and simple recurrent networks. Neural Computation, 1(3):372–381, 1989.

11. Jordan B. Pollack. The induction of dynamical recognizers. Machine Learning, 7:227–252, 1991.

12. C.L. Giles, C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun, and Y.C. Lee. Learning and extracting finite state automata with second-order recurrent neural networks. Neural Computation, 4(3):393–405, 1992.

  13. R.L. Watrous and G.M. Kuhn. Induction of finite-state languages using second-order recurrent networks. Neural Computation, 4(3):406–414, 1992.

  14. Arun Maskara and Andrew Noetzel. Forcing simple recurrent neural networks to encode context. In Proceedings of the 1992 Long Island Conference on Artificial Intelligence and Computer Graphics, 1992.

15. A. Sanfeliu and R. Alquézar. Active grammatical inference: a new learning methodology. In Dov Dori and A. Bruckstein, editors, Shape and Structure in Pattern Recognition (Proceedings of the IAPR International Workshop on Structural and Syntactic Pattern Recognition SSPR’94, Nahariya, Israel). World Scientific, Singapore, 1994.

  16. P. Manolios and R. Fanelli. First order recurrent neural networks and deterministic finite state automata. Neural Computation, 6(6):1154–1172, 1994.

  17. M.L. Forcada and R.C. Carrasco. Learning the initial state of a second-order recurrent neural network during regular-language inference. Neural Computation, 7(5):923–930, 1995.

  18. Peter Tiňo and Jozef Sajda. Learning and extracting initial Mealy automata with a modular neural network model. Neural Computation, 7(4), July 1995.

19. R.P. Ñeco and M.L. Forcada. Beyond Mealy machines: Learning translators with recurrent neural networks. In Proceedings of the World Conference on Neural Networks ’96, pages 408–411, San Diego, California, September 15–18, 1996.

  20. Marco Gori, Marco Maggini, E. Martinelli, and G. Soda. Inductive inference from noisy examples using the hybrid finite state filter. IEEE Transactions on Neural Networks, 9(3):571–575, 1998.

  21. M.W. Goudreau, C.L. Giles, S.T. Chakradhar, and D. Chen. First-order vs. second-order single layer recurrent neural networks. IEEE Transactions on Neural Networks, 5(3):511–513, 1994.

  22. Rafael C. Carrasco, Mikel L. Forcada, M. Ángeles Valdés-Muñoz, and Ramón P. Ñeco. Stable encoding of finite-state machines in discrete-time recurrent neural nets with sigmoid units. Neural Computation, 12, 2000. In press.

  23. Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.

  24. J.F. Kolen and Jordan B. Pollack. The observer’s paradox: apparent computational complexity in physical systems. Journal of Experimental and Theoretical Artificial Intelligence, 7:253–277, 1995.

25. J.F. Kolen. Fool’s gold: Extracting finite state machines from recurrent network dynamics. In J.D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems 6, pages 501–508, San Mateo, CA, 1994. Morgan Kaufmann.

  26. M. Casey. The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction. Neural Computation, 8(6):1135–1178, 1996.

  27. A. Blair and J.B. Pollack. Analysis of dynamical recognizers. Neural Computation, 9(5):1127–1142, 1997.

  28. C.W. Omlin and C.L. Giles. Constructing deterministic finite-state automata in recurrent neural networks. Journal of the ACM, 43(6):937–972, 1996.

  29. R. Alquézar and A. Sanfeliu. An algebraic framework to represent finite state automata in single-layer recurrent neural networks. Neural Computation, 7(5):931–949, 1995.

  30. Stefan C. Kremer. A Theory of Grammatical Induction in the Connectionist Paradigm. PhD thesis, Department of Computer Science, University of Alberta, Edmonton, Alberta, 1996.

  31. Paolo Frasconi, Marco Gori, Marco Maggini, and Giovanni Soda. Representation of finite-state automata in recurrent radial basis function networks. Machine Learning, 23:5–32, 1996.

  32. Jiří Šíma. Analog stable simulation of discrete neural networks. Neural Network World, 7:679–686, 1997.

  33. Jiří Šíma and Jiří Wiedermann. Theory of neuromata. Journal of the ACM, 45(1):155–178, 1998.

34. Ramón P. Ñeco, Mikel L. Forcada, Rafael C. Carrasco, and M. Ángeles Valdés-Muñoz. Encoding of sequential translators in discrete-time recurrent neural nets. In Proceedings of the European Symposium on Artificial Neural Networks ESANN’99, pages 375–380, 1999.

  35. Rafael C. Carrasco, Jose Oncina, and Mikel L. Forcada. Efficient encodings of finite automata in discrete-time recurrent neural networks. In Proceedings of ICANN’99, International Conference on Artificial Neural Networks, 1999. (in press).

  36. B.A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks: a survey. IEEE Transactions on Neural Networks, 6(5):1212–1228, 1995.

  37. F.J. Pineda. Generalization of back-propagation to recurrent neural networks. Physical Review Letters, 59(19):2229–2232, 1987.

38. L.B. Almeida. Backpropagation in perceptrons with feedback. In R. Eckmiller and Ch. von der Malsburg, editors, Neural Computers (Neuss, 1987), pages 199–208. Springer-Verlag, Berlin, 1988.

  39. S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381:520–522, 1996.

  40. Thomas Natschläger and Wolfgang Maass. Fast analog computation in networks of spiking neurons using unreliable synapses. In Proceedings of ESANN’99, European Symposium on Artificial Neural Networks, pages 417–422, 1999.

  41. Thomas Wennekers. Synfire graphs: From spike patterns to automata of spiking neurons. Technical Report Ulmer Informatik-Berichte Nr. 98-08, Universität Ulm, Fakultät für Informatik, 1998.


Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Forcada, M.L., Carrasco, R.C. (2001). Finite-State Computation in Analog Neural Networks: Steps towards Biologically Plausible Models? In: Wermter, S., Austin, J., Willshaw, D. (eds) Emergent Neural Computational Architectures Based on Neuroscience. Lecture Notes in Computer Science, vol 2036. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44597-8_34

  • DOI: https://doi.org/10.1007/3-540-44597-8_34

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42363-8

  • Online ISBN: 978-3-540-44597-5
