Abstract
In the previous chapters, we presented an implementation of a neural network made of layers and neurons (i.e., instances of NeuronLayer and Neuron). Although instructive, that implementation does not reflect the classical way of implementing a neural network. A layer can instead be expressed as a matrix of weights and a vector of biases. This is how most neural network libraries (e.g., TensorFlow and PyTorch) actually operate.
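The equivalence between the two views can be sketched as follows. This is a hypothetical NumPy illustration, not the book's Pharo code: a layer of three neurons over two inputs is a 3x2 weight matrix `W` and a bias vector `b`, and the whole layer's output is one matrix-vector product instead of three per-neuron dot products.

```python
import numpy as np

# Hypothetical example (not the book's implementation): 3 neurons, 2 inputs.
rng = np.random.default_rng(42)
W = rng.standard_normal((3, 2))   # one row of weights per neuron
b = rng.standard_normal(3)        # one bias per neuron
x = np.array([0.5, -1.0])         # input vector

# Matrix form: the whole layer computed in one operation.
z_matrix = W @ x + b

# Neuron-by-neuron form: each neuron computes a dot product plus its bias.
z_neurons = np.array([W[i] @ x + b[i] for i in range(3)])

# Both formulations produce the same pre-activation values.
assert np.allclose(z_matrix, z_neurons)
```

Collapsing a layer into a single matrix operation is what lets such libraries delegate the bulk of the computation to optimized linear-algebra routines.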
© 2020 Alexandre Bergel
Bergel, A. (2020). A Matrix Library. In: Agile Artificial Intelligence in Pharo. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-5384-7_6
Print ISBN: 978-1-4842-5383-0
Online ISBN: 978-1-4842-5384-7