The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
Publication: 5072044
DOI: 10.1002/cpa.22008
OpenAlex: W3172995164
MaRDI QID: Q5072044
Authors: Song Mei, Andrea Montanari
Publication date: 25 April 2022
Published in: Communications on Pure and Applied Mathematics
Full work available at URL: https://arxiv.org/abs/1908.05355
Related Items (27)
Mehler’s Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks
Deep learning: a statistical viewpoint
Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
Theoretical issues in deep networks
The inverse variance–flatness relation in stochastic gradient descent is critical for finding flat minima
Overparameterization and Generalization Error: Weighted Trigonometric Interpolation
On the Benefit of Width for Neural Networks: Disappearance of Basins
High dimensional binary classification under label shift: phase transition and regularization
Large-dimensional random matrix theory and its applications in deep learning and wireless communications
On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions
Free dynamics of feature learning processes
A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors
Stability of the scattering transform for deformations with minimal regularity
Universality of approximate message passing with semirandom matrices
High-Dimensional Analysis of Double Descent for Linear Regression with Random Projections
Universality of regularized regression estimators in high dimensions
Benign Overfitting and Noisy Features
Precise statistical analysis of classification accuracies for adversarial training
On the robustness of minimum norm interpolators and regularized empirical risk minimizers
AdaBoost and robust one-bit compressed sensing
A Unifying Tutorial on Approximate Message Passing
A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
The interpolation phase transition in neural networks: memorization and generalization under lazy training
A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent*
For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability