Abstract
Most neural net models in the literature focus mainly on the structure of the net itself, leaving aside many details of synapses and dendrites. This is reductionist if we want our models to approximate real neural nets: these structures tend to be very elaborate, and are able to process information in very complex ways (see [MEL 94] for details).
We introduce a new model, the S-Net (Synaptic Net), to represent neural nets with special emphasis on synaptic and dendritic transmission. First, we present the mathematical structure supporting S-Nets, initially inspired by the Petri-net formalism, with the addition of a transition-to-transition connection type. S-Nets have two main components: neurones and synaptic/dendritic units (s/d units). All activation values are integers. Neurones are similar to McCulloch-Pitts neurones, and s/d units process information within a certain class of functions.
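As a rough illustration of the neurone component just described, the following minimal sketch models a McCulloch-Pitts-style unit with integer-valued inputs, unit weights, and a zero activation threshold; the function name and signature are illustrative assumptions, not the paper's notation.

```python
def neurone(inputs, threshold=0):
    """McCulloch-Pitts-style unit (illustrative sketch): all synaptic
    weights are 1, so the unit simply sums its integer input impulses
    and fires (outputs 1) iff the sum exceeds the threshold."""
    return 1 if sum(inputs) > threshold else 0
```

For example, a unit receiving one excitatory impulse (`neurone([1])`) fires, while one whose excitation is cancelled by an inhibitory impulse (`neurone([1, -1])`) stays silent.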
S-Nets can represent spatial net structures in a very natural way: we can easily model the length of an axon, the connection or branching of two dendrites, or synaptic connections. Some examples are shown.
Next, the focus is on what kinds of functions are suited to s/d units. We present three function types: sum, maximization, and simple negation (changing an excitatory impulse into an inhibitory one, or vice versa). With these functions for s/d units, and with simple neurones, we prove that every recursive function can be computed by at least one specific S-Net. To achieve this, we use the Register Machine and show how to build, for each symbolic program, an S-Net that computes the function defined by that program. A simple application example is shown. This computational power is achieved without using synaptic weights (i.e., all weights are one, as in McCulloch's model) or neurone activation values (i.e., all values are set to zero).
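The three s/d unit function types named above can be sketched as follows; this is a minimal illustration assuming impulses are represented as signed integers (positive for excitatory, negative for inhibitory), and the function names are hypothetical, not the paper's notation.

```python
def sd_sum(impulses):
    """Sum-type s/d unit: combines incoming integer impulses by addition."""
    return sum(impulses)

def sd_max(impulses):
    """Maximization-type s/d unit: passes on only the largest impulse."""
    return max(impulses)

def sd_negate(impulse):
    """Simple-negation s/d unit: turns an excitatory impulse into an
    inhibitory one, or vice versa, by flipping its sign."""
    return -impulse
```

Under this signed-integer convention, an excitatory impulse arriving at a negation unit (`sd_negate(1)`) leaves as an inhibitory one, which is the behaviour the negation function type describes.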
Finally, some directions for future investigation are presented, namely the possibility of synaptic-synaptic connections, how noise can be handled, and other features intended to bring this mathematical model closer to biological reality.
This work was supported by JNICT PBIC/TIT/2527/95.
Bibliography
ANDERSON, J. (1995). An Introduction to Neural Networks. MIT Press.
ARBIB, M. (1989). The Metaphorical Brain 2: Neural Networks and Beyond. John Wiley & Sons.
CUTLAND, N. (1988). Computability: An Introduction to Recursive Function Theory. Cambridge University Press.
McCULLOCH, W.; PITTS, W. (1943). "A logical calculus of the ideas immanent in nervous activity". Bulletin of Mathematical Biophysics, 5, pp. 115–133.
MEL, B. (1994). "Information Processing in Dendritic Trees". Neural Computation, 6, pp. 1031–1085.
SHEPHERD, G. (1994). Neurobiology. Oxford University Press.
SIEGELMANN, H.; SONTAG, E. (1995). "On the Computational Power of Neural Nets". Journal of Computer and System Sciences, Vol. 50, No. 1. Academic Press.
Copyright information
© 1997 Springer-Verlag Berlin Heidelberg
Cite this paper
Neto, J.P., Costa, J.F., Coelho, H. (1997). Lower bounds of computational power of a synaptic calculus. In: Mira, J., Moreno-Díaz, R., Cabestany, J. (eds) Biological and Artificial Computation: From Neuroscience to Technology. IWANN 1997. Lecture Notes in Computer Science, vol 1240. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032492
DOI: https://doi.org/10.1007/BFb0032492
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-63047-0
Online ISBN: 978-3-540-69074-0