Learning a deterministic finite automaton with a recurrent neural network

  • Laura Firoiu
  • Tim Oates
  • Paul R. Cohen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1433)

Abstract

We consider the problem of learning a finite automaton from positive evidence with a recurrent neural network. We train an Elman recurrent neural network on a set of sentences from a language and extract a finite automaton by clustering the states of the trained network. We observe that the generalizations beyond the training set in the language recognized by the extracted automaton are due to the training regime: the network performs a "loose" minimization of the prefix DFA of the training set, the automaton with one state for each prefix of a sentence in the set.
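As a concrete sketch of the starting point of that minimization, the Python fragment below (an illustration, not the authors' implementation; the names build_prefix_dfa and accepts are hypothetical) builds the prefix DFA of a small training set: one state per distinct prefix of the sentences, the empty prefix as the start state, and the complete sentences as accepting states.

```python
# Minimal sketch, assuming sentences are given as tuples of symbols.
# This is the prefix DFA of the abstract: one state per distinct prefix.

def build_prefix_dfa(sentences):
    """Build the prefix DFA of a set of sentences."""
    states = {(): 0}       # prefix -> state id; () is the start state
    delta = {}             # (state id, symbol) -> state id
    accepting = set()
    for sentence in sentences:
        prefix = ()
        for symbol in sentence:
            nxt = prefix + (symbol,)
            if nxt not in states:
                states[nxt] = len(states)
            delta[(states[prefix], symbol)] = states[nxt]
            prefix = nxt
        accepting.add(states[prefix])
    return states, delta, accepting

def accepts(delta, accepting, sentence):
    """Run the DFA from the start state; reject on a missing transition."""
    state = 0
    for symbol in sentence:
        if (state, symbol) not in delta:
            return False
        state = delta[(state, symbol)]
    return state in accepting

if __name__ == "__main__":
    train = [("a", "b"), ("a", "b", "b"), ("b", "a")]
    states, delta, accepting = build_prefix_dfa(train)
    print(len(states), "states")                  # 6: one per distinct prefix
    print(accepts(delta, accepting, ("a", "b")))  # True: in the training set
    print(accepts(delta, accepting, ("b", "b")))  # False: unseen prefix
```

This automaton accepts exactly the training set; merging its states is what produces generalization beyond it, and in the paper that role is played by clustering the hidden-state vectors of the trained Elman network.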

Keywords

Network State · Recurrent Neural Network · Hidden Unit · Finite Automaton · Input Unit


Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Laura Firoiu (1)
  • Tim Oates (1)
  • Paul R. Cohen (1)
  1. Computer Science Department, University of Massachusetts at Amherst, USA
