Robust Implementation of Finite Automata by Recurrent RBF Networks

  • Michal Šorel
  • Jiří Šíma
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1963)

Abstract

In this paper, a recurrent network consisting of O(√m log m) RBF (radial basis function) units with the maximum norm, employing any activation function that takes different values at at least two nonnegative points, is constructed to implement a given deterministic finite automaton with m states. The underlying simulation proves to be robust with respect to analog noise for a large class of smooth activation functions with a special type of inflexion.
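The paper's construction itself is not reproduced here, but the two objects it relates can be sketched for orientation: an RBF unit whose response depends on the maximum-norm (L∞) distance to its centre, and a plain table-driven deterministic finite automaton. The Gaussian below is only one example of an activation taking different values at two nonnegative points; the unit and DFA names are illustrative, not from the paper.

```python
import math

# A maximum-norm RBF unit: the response depends on the L-infinity
# distance between the input vector x and the centre c, scaled by a
# width parameter. The Gaussian here is one admissible activation
# (it takes different values at, e.g., t = 0 and t = 1).
def rbf_unit(x, centre, width=1.0, activation=lambda t: math.exp(-t * t)):
    dist = max(abs(xi - ci) for xi, ci in zip(x, centre))  # maximum norm
    return activation(dist / width)

# A deterministic finite automaton given as a transition table.
# This is the object the paper's network simulates, not the
# simulation construction itself.
def run_dfa(delta, start, accepting, word):
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

# Example: a 2-state DFA over {0, 1} accepting words with an even
# number of 1s.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
```

The point of the maximum norm is that an RBF unit's "receptive field" becomes a hypercube rather than a ball, which is what makes a digital, automaton-like encoding of states by such units tractable.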

Keywords

Radial Basis Function · Recurrent Neural Network · Finite Automaton · Neural Computation · Kolmogorov Complexity



Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Michal Šorel¹
  • Jiří Šíma²⁻³
  1. Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague 8, Czech Republic
  2. Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague 8, Czech Republic
  3. Institute for Theoretical Computer Science (ITI), Charles University, Prague, Czech Republic