A Matrix Library

Abstract

In the previous chapters, we presented an implementation of a neural network made of layers and neurons (i.e., instances of NeuronLayer and Neuron). Although instructive, that implementation does not reflect the way neural networks are classically implemented. A layer can instead be expressed as a matrix of weights and a vector of biases, which is how most neural network libraries (e.g., TensorFlow and PyTorch) actually operate.
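As a rough sketch of this formulation (written here with plain Pharo Arrays and the standard collection protocol, not with the matrix class the chapter goes on to build), the output z of a layer is the matrix–vector product of its weight matrix with the input vector, plus its bias vector:

"Hypothetical values: a 3-neuron layer receiving 2 inputs.
 w is the 3x2 weight matrix, b the bias vector, x the input vector;
 we compute z = W*x + b one row (i.e., one neuron) at a time."
| w b x z |
w := #( #(1 2) #(3 4) #(5 6) ).
b := #( 1 1 1 ).
x := #( 1 2 ).
z := w with: b collect: [ :row :bias |
	bias + ((row with: x collect: [ :wij :xj | wij * xj ]) sum) ].
z   "=> #(6 12 18)"

With a matrix library, the two nested loops above collapse into a single matrix–vector expression, which is the bulk style of computation that libraries such as TensorFlow and PyTorch perform.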

Copyright information

© 2020 Alexandre Bergel

Cite this chapter

Bergel, A. (2020). A Matrix Library. In: Agile Artificial Intelligence in Pharo. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-5384-7_6
