In this repo I'm trying to reproduce some double descent results from several papers:
- Double Descent Demystified: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle: https://arxiv.org/abs/2303.14151
- Deep Double Descent: Where Bigger Models and More Data Hurt: https://arxiv.org/abs/1912.02292
- High-dimensional analysis of double descent for linear regression with random projections: https://arxiv.org/abs/2303.01372
- More Data Can Hurt for Linear Regression: Sample-wise Double Descent: https://arxiv.org/abs/1912.07242
- A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning: https://arxiv.org/abs/2310.18988
- Reconciling modern machine learning practice and the bias-variance trade-off: https://arxiv.org/abs/1812.11118
- Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime: https://arxiv.org/abs/2003.01054
- The generalization error of random features regression: Precise asymptotics and double descent curve: https://arxiv.org/abs/1908.05355
- Triple descent and the two kinds of overfitting: Where & why do they appear?: https://arxiv.org/abs/2006.03509
Nothing particularly useful here (unless you're interested in double descent).
Reproducing polynomial regression results from the Double Descent Demystified paper:
(Figure: polynomial regression fits in the underparameterized regime, at the interpolation threshold, and in the overparameterized regime.)
One of the most fervent claims made by modern-day DL researchers has always been that "bigger models work better!". This conflicts with standard statistical learning theory, which predicts that bigger models will overfit the training data, interpolate the noise, and fail to generalize.
Who's right? Enter double descent.
Double descent describes the phenomenon where a model's test error, as a function of model complexity or size, doesn't follow the traditional U-shaped bias-variance tradeoff curve. Instead, after an initial descent (error decreases) and a subsequent ascent (error increases due to overfitting), the error descends a second time as model complexity grows beyond the interpolation threshold.
One point for modern DL folks (although this doesn't necessarily contradict classic bias-variance).
Given a model whose complexity is measured by its number of parameters $p$, and a training set of $n$ samples, assume a scenario where you fit a polynomial regression model:

$$\hat{y}(x) = \sum_{j=0}^{d} \beta_j x^j,$$

where $d$ is the polynomial degree and the targets are generated as $y = f(x) + \varepsilon$ for some noise term $\varepsilon$. As the degree $d$ (akin to model complexity, with $p = d + 1$ parameters) increases, the fit to the training data becomes perfect once the number of parameters reaches the number of training samples ($p = n$): the interpolation threshold.
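To make the setup concrete, here's a minimal sketch (not the repo's actual script) of the polynomial-regression experiment: fit polynomials of increasing degree to a small noisy training set using the pseudoinverse, which gives the ordinary least-squares fit when underparameterized and the minimum-norm interpolating fit when overparameterized, and watch the test error peak near the interpolation threshold before descending again. The target function, noise level, and sample sizes are illustrative assumptions; Legendre-polynomial features are used here only for numerical stability, and the paper's exact basis and data may differ.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)

def true_fn(x):
    # Arbitrary smooth target (an assumption, not the paper's exact choice).
    return 2 * x * np.cos(4 * x)

n_train, noise_std = 15, 0.5
x_train = rng.uniform(-1, 1, n_train)
y_train = true_fn(x_train) + noise_std * rng.normal(size=n_train)
x_test = np.linspace(-1, 1, 500)
y_test = true_fn(x_test)

for degree in [1, 2, 5, 10, 14, 20, 50, 200]:
    # Legendre-polynomial features: degree + 1 parameters per model.
    Phi_train = legvander(x_train, degree)
    Phi_test = legvander(x_test, degree)
    # pinv returns the least-squares solution when p <= n and the
    # minimum-norm interpolating solution once degree + 1 > n_train.
    beta = np.linalg.pinv(Phi_train) @ y_train
    test_mse = np.mean((Phi_test @ beta - y_test) ** 2)
    print(f"degree={degree:>3}  params={degree + 1:>3}  test MSE={test_mse:.3f}")
```

With these (assumed) settings, the printed test MSE typically rises sharply as the parameter count approaches $n = 15$ (degree 14) and then falls again at much higher degrees, which is the double-descent shape described above.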