Regularization
Techniques to prevent overfitting and improve generalization
What is Regularization?
Regularization refers to techniques used to prevent overfitting in machine learning models by adding a penalty term to the loss function or modifying the training process. This encourages simpler models that generalize better to unseen data.
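As a concrete sketch of "adding a penalty term", the snippet below fits ridge regression (L2 penalty) in closed form with numpy and shows that the penalty shrinks the weight vector compared with ordinary least squares. The toy data, the value of the penalty strength `lam`, and the helper name `ridge_fit` are illustrative assumptions, not from the original text.

```python
import numpy as np

# Toy data: y depends only on the first feature; the other
# redundant features give an unregularized fit room to overfit.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
true_w = np.zeros(10)
true_w[0] = 3.0
y = X @ true_w + rng.normal(scale=0.5, size=50)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_ols = ridge_fit(X, y, 0.0)     # lam = 0 recovers ordinary least squares
w_ridge = ridge_fit(X, y, 10.0)  # lam > 0 penalizes large weights

# The L2 penalty shrinks the weight vector toward zero.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))  # True
```

Larger `lam` trades a little training-set fit for a simpler (smaller-norm) model, which is exactly the overfitting-vs-generalization trade described above.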
Key Points
1. Prevents overfitting
2. Adds penalty for complexity
3. L1, L2, and elastic net variants
4. Improves model generalization
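The three penalty variants named in the key points can be written as small functions. The elastic net shown here follows the convention used by scikit-learn's `ElasticNet` (a mix controlled by `l1_ratio`); the function names and the sample weight vector are illustrative.

```python
import numpy as np

def l1_penalty(w):
    """Lasso penalty: sum of absolute weights, ||w||_1."""
    return np.sum(np.abs(w))

def l2_penalty(w):
    """Ridge penalty: sum of squared weights, ||w||_2^2."""
    return np.sum(w ** 2)

def elastic_net_penalty(w, alpha=1.0, l1_ratio=0.5):
    """Weighted mix of L1 and L2, as in scikit-learn's ElasticNet:
    alpha * (l1_ratio * ||w||_1 + 0.5 * (1 - l1_ratio) * ||w||_2^2)."""
    return alpha * (l1_ratio * l1_penalty(w)
                    + 0.5 * (1.0 - l1_ratio) * l2_penalty(w))

w = np.array([1.0, -2.0, 0.0, 0.5])
print(l1_penalty(w))           # 3.5
print(l2_penalty(w))           # 5.25
print(elastic_net_penalty(w))  # 0.5 * 3.5 + 0.25 * 5.25 = 3.0625
```

With `l1_ratio=1.0` the penalty reduces to pure lasso, and with `l1_ratio=0.0` to pure ridge, so elastic net interpolates between the two.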
Practical Examples
L2 regularization (Ridge): penalizes the sum of squared weights, shrinking them toward zero
L1 regularization (Lasso): penalizes the sum of absolute weights, driving some exactly to zero (implicit feature selection)
Dropout: randomly zeroes units during training so the network cannot rely on any single unit
Early stopping: halts training once validation performance stops improving
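The two non-penalty techniques in the list, dropout and early stopping, modify the training process rather than the loss. Below is a minimal numpy sketch of each: inverted dropout (the common formulation, which rescales survivors so the expected activation is unchanged) and a patience-based early-stopping loop. The dropout probability, patience value, and the hard-coded validation losses are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p during
    training, rescale survivors by 1/(1-p); identity at test time."""
    if not training or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

a = np.ones(1000)
dropped = dropout(a, p=0.5)
# About half the units are zeroed and the rest scaled to 2.0,
# so the mean activation stays close to 1.0.
print(abs(dropped.mean() - 1.0) < 0.1)  # True

# Early stopping: stop when validation loss has not improved
# for `patience` consecutive epochs.
val_losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.63, 0.64]  # illustrative
patience, best, wait, stop_epoch = 2, float("inf"), 0, None
for epoch, loss in enumerate(val_losses):
    if loss < best:
        best, wait = loss, 0
    else:
        wait += 1
        if wait >= patience:
            stop_epoch = epoch
            break
print(stop_epoch)  # 4: two epochs with no improvement after epoch 2
```

Both techniques limit how closely the model can fit the training set, which is the common thread across all four examples.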