Definition:
According to Goodfellow et al. (2016): “Regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error.”
Regularization techniques:
- Data Pre-Processing:
  - Dataset Augmentation.
- Constraints:
  - L¹/L² regularization.
  - Early stopping.
- Additional layers:
  - Dropout.
  - Max-Norm Regularization.
  - Batch Normalization.
  - Layer Normalization.
Data Pre-Processing:
We can reduce model overfitting by applying certain modifications to the data before training.
1- Dataset Augmentation:
The more data we have, the better our model generalizes, but in some cases collecting more data is costly. There are techniques we can use to generate more data from the data we already have; these techniques are called Data Augmentation. We will discuss some Data Augmentation techniques that can be applied to image data.
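As a concrete illustration, here is a minimal NumPy sketch of three common image augmentations (a horizontal flip, a rotation, and additive noise); the `augment` helper, its parameters, and the noise level are illustrative assumptions, not a fixed recipe.

```python
import numpy as np

def augment(image, rng=None):
    """Return simple augmented variants of one image.

    Assumes `image` is a NumPy array of shape (H, W, C) with
    pixel values in [0, 1]. Function name and noise scale are
    hypothetical choices for this sketch.
    """
    rng = rng or np.random.default_rng()
    return [
        np.fliplr(image),              # horizontal flip
        np.rot90(image, k=1),          # 90-degree rotation
        np.clip(image + rng.normal(0, 0.05, image.shape), 0, 1),  # Gaussian noise
    ]

# Usage: each original sample yields three extra training samples.
image = np.random.default_rng(0).random((32, 32, 3))
variants = augment(image)
print(len(variants), [v.shape for v in variants])
```

In practice, libraries such as torchvision or Keras provide augmentation pipelines that apply transformations like these randomly at training time, so the model sees a slightly different version of each image every epoch.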