Extended Random Neural Networks
Random neural networks closely mimic the biological nervous system. However, it is difficult, during learning, to satisfy the biological constraints imposed on their parameters. In this paper, two possible extensions are proposed to remove this difficulty. Moreover, the proposed learning algorithm is tailored to the specific architecture in order to reduce the computational cost. Two architectures are considered and illustrated by simulation tests.
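For context, the standard random neural network that such extensions build on computes, for each neuron, a steady-state excitation probability from excitatory and inhibitory spike-arrival rates. The sketch below illustrates this classical fixed-point computation under common assumptions (the function name, variable names, and the constraint that each neuron's firing rate equals its total outgoing weight are modeling conventions assumed here, not details taken from this paper):

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, max_iter=500, tol=1e-10):
    """Fixed-point iteration for the steady-state excitation
    probabilities q_i of a classical random neural network
    (assumed formulation, for illustration only).

    W_plus[j, i]  : excitatory spike rate from neuron j to neuron i
    W_minus[j, i] : inhibitory spike rate from neuron j to neuron i
    Lambda, lam   : external excitatory / inhibitory arrival rates
    """
    n = len(Lambda)
    # Biological constraint (assumed): each neuron's firing rate equals
    # the total rate at which it emits excitatory and inhibitory spikes.
    # Keeping weights nonnegative and consistent with r during learning
    # is the kind of difficulty the abstract refers to.
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)
    q = np.zeros(n)
    for _ in range(max_iter):
        lam_plus = Lambda + q @ W_plus    # total excitatory arrival rate
        lam_minus = lam + q @ W_minus     # total inhibitory arrival rate
        q_new = np.minimum(lam_plus / (r + lam_minus), 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q
```

A small two-neuron network converges in a few iterations; all probabilities stay in [0, 1] as long as the weights and external rates are nonnegative.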
Keywords: Bimodal neuron · Recurrent architecture