An Example of Neural Code: Neural Trees Implemented by LRAAMs

  • Alessandro Sperduti
  • Antonina Starita


In this paper we discuss a general method for implementing a Neural Tree in a high-order recurrent network, the Executor, using an extension of the RAAM model, the Labeling RAAM (LRAAM). Neural Trees and LRAAMs are briefly reviewed and the Executor is defined. A Neural Tree is encoded by an LRAAM, and the decoding part of the LRAAM is used to control the dynamics of the Executor. The main feature of this technique is that the weights of the LRAAM can be regarded as a neural code implementing the Neural Tree on the Executor. An example of the method is presented for the 8-bit parity problem.
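To make the encode/decode machinery concrete, the following is a minimal NumPy sketch of an LRAAM-style autoencoder for labeled binary trees. Everything in it (layer widths, the sigmoid nonlinearity, the variable names) is an illustrative assumption, not the paper's implementation; the point is only that a (label, left, right) triple is compressed to a fixed-width code and that a decoder recovers the triple, the part that in the paper's method drives the Executor's dynamics.

```python
import numpy as np

# Illustrative LRAAM-style sketch; all dimensions and names are assumptions.
# An LRAAM compresses a (label, left, right) triple into a fixed-size code
# and reconstructs the triple with a decoder. Training by backpropagation
# of the reconstruction error is omitted here.

D = 8  # width of the compressed representation (assumed)
L = 4  # width of a node label (assumed)

rng = np.random.default_rng(0)
W_enc = rng.normal(scale=0.1, size=(D, L + 2 * D))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(L + 2 * D, D))   # decoder weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(label, left, right):
    """Compress a labeled node and its two child codes into one code."""
    x = np.concatenate([label, left, right])
    return sigmoid(W_enc @ x)

def decode(code):
    """Recover (label, left code, right code) from a compressed code."""
    y = sigmoid(W_dec @ code)
    return y[:L], y[L:L + D], y[L + D:]

# Encoding a two-leaf tree bottom-up; leaves use a fixed 'nil' child code.
nil = np.zeros(D)
leaf_a = encode(np.array([1., 0., 0., 0.]), nil, nil)
leaf_b = encode(np.array([0., 1., 0., 0.]), nil, nil)
root = encode(np.array([0., 0., 1., 0.]), leaf_a, leaf_b)

# The Executor would branch on the decoded label and child codes.
label, left_code, right_code = decode(root)
```

In the scheme the abstract describes, the trained decoder weights themselves act as the neural code: loading them into the Executor is what implements the Neural Tree on it.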


Keywords: Coder Network, Tree Node, Parity Problem, Neural Code, Linear Unit





Copyright information

© Springer-Verlag/Wien 1993

Authors and Affiliations

  • Alessandro Sperduti (1)
  • Antonina Starita (2)

  1. ICSI, Berkeley, USA
  2. Dipartimento di Informatica, Pisa, Italy
