Nonlinear Perceptrons

  • Andrzej Bielecki
Part of the Studies in Computational Intelligence book series (SCI, volume 770)


In this chapter the training process of the most general class of perceptrons, the nonlinear ones, is considered. Runge-Kutta methods, above all the gradient descent method (the Euler method), which are used as numerical training algorithms, are studied in the context of their stability and robustness. It should be stressed that the continuous model of the training process is considered in the Euclidean space \(\mathbb {R}^n.\) The training algorithm is implemented as an iterative numerical rule in \(\mathbb {R}^n\) as well. However, the theoretical analysis presented in this chapter concerns numerical schemata on the \(n-\)dimensional compact manifold \(\mathcal{M}_S^n,\) which is homeomorphic to the sphere \(\mathcal{S}^n.\) This is possible thanks to a specific compactification procedure, which is described in detail in Step 1 of the proof of Theorem 11.1. Such an approach allows us to apply results concerning numerical dynamics on compact manifolds.
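The viewpoint above, in which gradient descent is the Euler discretization of the continuous gradient flow \(w'(t) = -\nabla E(w(t)),\) can be illustrated with a minimal sketch. The code below is not taken from the book; it is an assumed example for a single nonlinear (sigmoidal) perceptron trained on toy data, where the learning rate plays the role of the Euler step size.

```python
import numpy as np

def sigmoid(z):
    """Sigmoidal activation of a nonlinear perceptron."""
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(w, X, y):
    """Gradient of the mean squared error E(w) = 0.5 * mean((sigmoid(Xw) - y)^2)."""
    a = sigmoid(X @ w)
    return X.T @ ((a - y) * a * (1.0 - a)) / len(y)

def euler_train(w0, X, y, h=1.0, steps=5000):
    """Euler scheme w_{k+1} = w_k - h * grad E(w_k).

    This is the iterative numerical rule in R^n: the step size h is
    exactly the learning rate of gradient descent."""
    w = w0.copy()
    for _ in range(steps):
        w -= h * loss_grad(w, X, y)
    return w

# Toy data: the logical AND function, with a constant bias input.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w = euler_train(np.zeros(3), X, y)
```

Higher-order Runge-Kutta schemes, which the chapter also analyzes, would replace the single gradient evaluation per step with a weighted combination of evaluations at intermediate points.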

Copyright information

© Springer International Publishing AG, part of Springer Nature 2019

Authors and Affiliations

  1. Faculty of Electrical Engineering, Automation, Computer Science and Biomedical Engineering, AGH University of Science and Technology, Cracow, Poland