Abstract
In this chapter, I will discuss what a neuron is and what its components are. I will clarify the mathematical notation we will require and cover the many activation functions used in neural networks today. Gradient descent optimization will be discussed in detail, and the concept of the learning rate and its quirks will be introduced. To make things a bit more fun, we will then use a single neuron to perform linear and logistic regression on real datasets. I will then show how to implement the two algorithms with TensorFlow.
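The single-neuron linear regression described above can be sketched in a few lines. This is not the chapter's TensorFlow code; it is a minimal NumPy illustration, with synthetic data standing in for a real dataset, of a neuron with an identity activation trained by gradient descent:

```python
import numpy as np

# Minimal sketch (not the book's code): a single neuron with identity
# activation trained by gradient descent, i.e., linear regression.
# Synthetic data y = 2x + 1 stands in for a real dataset.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * X + 1.0

w, b = 0.0, 0.0   # neuron weight and bias
lr = 0.1          # learning rate

for _ in range(500):
    y_hat = w * X + b                 # neuron output (identity activation)
    err = y_hat - y
    # Gradients of the MSE cost J = mean(err**2) with respect to w and b
    w -= lr * 2.0 * np.mean(err * X)
    b -= lr * 2.0 * np.mean(err)

print(w, b)   # approaches the true values 2.0 and 1.0
```

The same loop structure carries over to logistic regression: swap the identity activation for a sigmoid and the MSE cost for cross-entropy.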
Notes
1. You can find a more extensive explanation of how NumPy uses broadcasting in the official documentation, available at https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html.
2. A contour line of a function is a curve along which the function has a constant value.
3. Delve (Data for Evaluating Learning in Valid Experiments), “The Boston Housing Dataset,” www.cs.toronto.edu/~delve/data/boston/bostonDetail.html, 1996.
4. A discussion of the meaning of cross-entropy is beyond the scope of this book. A nice introduction can be found at https://rdipietro.github.io/friendly-intro-to-cross-entropy-loss/ and in many introductory books on machine learning.
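As a small illustration of the cross-entropy cost mentioned in the last note (a sketch, not the book's code): for binary labels, the cost averages -[y log(ŷ) + (1-y) log(1-ŷ)] over the observations, and it penalizes confident wrong predictions heavily.

```python
import numpy as np

# Illustrative sketch (not from the book): the binary cross-entropy cost
# used for logistic regression, averaged over the observations.
def cross_entropy(y, y_hat, eps=1e-12):
    y_hat = np.clip(y_hat, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))

y = np.array([1.0, 0.0, 1.0])
good = cross_entropy(y, np.array([0.9, 0.1, 0.8]))  # confident and correct
bad = cross_entropy(y, np.array([0.2, 0.9, 0.3]))   # confident and wrong
print(good < bad)   # the cost is lower for the better predictions
```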
Copyright information
© 2018 Umberto Michelucci
Cite this chapter
Michelucci, U. (2018). Single Neuron. In: Applied Deep Learning. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3790-8_2
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-3789-2
Online ISBN: 978-1-4842-3790-8
eBook Packages: Professional and Applied Computing, Apress Access Books, Professional and Applied Computing (R0)