Abstract
We are still on the subject of gradient descent, but let us now focus on it as an optimization method, because that is what makes it so important to deep learning. Gradient descent is an optimization method for finding the minimum of a function; in training, it updates the weights of a neural network using the gradients computed through backpropagation.
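As a minimal illustration of the idea (a sketch, not the chapter's own code), the following shows gradient descent minimizing a simple quadratic; the function, learning rate, and step count are illustrative assumptions.

```python
# Minimal gradient descent sketch (illustrative; not from the chapter).
# Minimizes f(w) = (w - 3)^2, whose gradient is f'(w) = 2 * (w - 3).

def gradient_descent(lr=0.1, steps=100):
    w = 0.0                     # initial weight
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)  # analytic gradient of f at w
        w -= lr * grad          # update rule: w <- w - lr * grad
    return w

w_min = gradient_descent()
print(w_min)  # converges toward the minimizer w = 3
```

In a neural network the same update rule is applied to every weight, with the gradients supplied by backpropagation instead of an analytic formula.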
Copyright information
© 2020 Hisham El-Amir and Mahmoud Hamdy
Cite this chapter
El-Amir, H., Hamdy, M. (2020). Improving Deep Neural Networks. In: Deep Learning Pipeline. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-5349-6_10
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-5348-9
Online ISBN: 978-1-4842-5349-6
eBook Packages: Professional and Applied Computing (R0), Apress Access Books