Abstract
Neural networks are a class of computational models that has recently attracted the interest of the Artificial Intelligence community. One of the drawbacks of neural networks is their slow rate of convergence. This is even more pronounced when we want to use them in significant applications, where the number of units has to be very high. Even the most powerful sequential computers lack the capacity to solve most real problems in reasonable time. Nevertheless, neural networks inherently possess a high degree of parallelism that allows them to be implemented easily on parallel architectures. In our work we investigated the possibility of mapping a generic neural network onto a transputer system.
Copyright information
© 1991 Springer Science+Business Media Dordrecht
Cite this chapter
Dorigo, M., Schätz, B., Sorrenti, D. (1991). On the Use of Transputers to Implement Neural Networks. In: Tzafestas, S.G. (eds) Engineering Systems with Intelligence. Microprocessor-Based and Intelligent Systems Engineering, vol 9. Springer, Dordrecht. https://doi.org/10.1007/978-94-011-2560-4_21
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-010-5130-9
Online ISBN: 978-94-011-2560-4