Abstract
In this chapter, you will look at a very important set of techniques often used when training deep networks: regularization. You will look at methods such as ℓ2 and ℓ1 regularization, dropout, and early stopping. You will see how, when applied correctly, these methods help avoid the problem of overfitting and lead to much better results from your models. You will look at the mathematics behind the methods and at how to implement them correctly in Python and TensorFlow.
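To make the idea concrete before the detailed treatment, here is a minimal sketch (not the chapter's own code) of ℓ2 regularization: the training cost becomes the ordinary loss plus a penalty proportional to the sum of the squared weights, with a hypothetical hyperparameter `lam` controlling the penalty's strength. The example uses plain NumPy with a mean-squared-error loss for illustration.

```python
import numpy as np

def l2_regularized_cost(y_true, y_pred, weights, lam):
    """MSE loss plus an l2 penalty on the weights.

    lam is the regularization hyperparameter (illustrative name):
    larger values shrink the weights more aggressively.
    """
    mse = np.mean((y_true - y_pred) ** 2)        # ordinary data-fit term
    l2_penalty = lam * np.sum(weights ** 2)       # penalty on large weights
    return mse + l2_penalty

# Toy example: three predictions and two weights.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
w = np.array([0.5, -0.5])
cost = l2_regularized_cost(y_true, y_pred, w, lam=0.1)
# mse = 0.02, penalty = 0.1 * 0.5 = 0.05, so cost = 0.07
```

ℓ1 regularization follows the same pattern with `np.sum(np.abs(weights))` in place of the squared sum, which tends to drive some weights exactly to zero.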
Notes
1. In a statistical-classification problem with two classes, a decision boundary, or decision surface, is a surface that partitions the underlying space into two sets, one for each class. (Source: Wikipedia, https://goo.gl/E5nELL)
Copyright information
© 2018 Umberto Michelucci
About this chapter
Cite this chapter
Michelucci, U. (2018). Regularization. In: Applied Deep Learning. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3790-8_5
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-3789-2
Online ISBN: 978-1-4842-3790-8
eBook Packages: Professional and Applied Computing; Apress Access Books; Professional and Applied Computing (R0)