Massively Parallel Training of Multi-Layer Perceptrons With Irregular Topologies
In this paper we present an approach to the training of feedforward neural networks on massively parallel SIMD architectures. To cover a wide field of applications, we focus on the flexibility of the load-balancing routines. Our approach is characterized by three important properties: 1. All four types of parallelism inherent in the training phase are exploited. 2. In a preprocessing step, neural networks are transformed into equivalent topologies that are better suited for parallel computation. 3. Each learning task can be parallelized in a number of different ways, the best of which is chosen according to estimates of the computing efficiency, as sketched below.
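As a loose illustration of property 3 (not PINK2's actual estimator, which the abstract does not detail), the following Python sketch compares hypothetical efficiency estimates for the four classic levels of parallelism in neural network training (training session, training pattern, node, and connection) and picks the best one. All grain counts and the cost model here are illustrative assumptions.

```python
import math

def slot_efficiency(grains: int, procs: int) -> float:
    """Fraction of processor time slots doing useful work when `grains`
    equal-sized work units run on `procs` SIMD processors in rounds."""
    if grains <= 0:
        return 0.0
    rounds = math.ceil(grains / procs)
    return grains / (rounds * procs)

def best_scheme(layers, n_patterns, n_sessions, n_procs):
    """Pick the parallelization level with the highest estimated
    efficiency. The grain counts are illustrative assumptions:
      session    -- independent training runs
      pattern    -- training examples per epoch
      node       -- neurons in the widest layer
      connection -- weights of the largest weight matrix"""
    grains = {
        "session": n_sessions,
        "pattern": n_patterns,
        "node": max(layers),
        "connection": max(a * b for a, b in zip(layers, layers[1:])),
    }
    estimates = {s: slot_efficiency(g, n_procs) for s, g in grains.items()}
    return max(estimates, key=estimates.get), estimates

# Example: an 8-5-3 network, 4096 patterns, 10 runs, 16384 PEs (MP-1216)
scheme, est = best_scheme([8, 5, 3], n_patterns=4096, n_sessions=10,
                          n_procs=16384)
print(scheme, est)
```

A real estimator for a SIMD machine would also have to account for communication and masking overheads; the sketch only shows the selection mechanism, choosing the level whose grain count keeps the most processing elements busy.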
Following these concepts, we developed PINK2, a massively parallel simulator kernel for the MasPar MP1216. In contrast to most known approaches, which are efficient only for special topologies, it achieves good computing performance on a broad range of differing benchmark problems.
Keywords: Load Balance, Learning Problem, Parallel Representation, Connection Level, Parallel Training