The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve

From MaRDI portal
Publication:5072044

DOI: 10.1002/cpa.22008
OpenAlex: W3172995164
MaRDI QID: Q5072044

No author found.

Publication date: 25 April 2022

Published in: Communications on Pure and Applied Mathematics

Full work available at URL: https://arxiv.org/abs/1908.05355




Related Items (29)

Mehler's Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks
Deep learning: a statistical viewpoint
Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
Theoretical issues in deep networks
The inverse variance–flatness relation in stochastic gradient descent is critical for finding flat minima
Overparameterization and Generalization Error: Weighted Trigonometric Interpolation
On the Benefit of Width for Neural Networks: Disappearance of Basins
High dimensional binary classification under label shift: phase transition and regularization
Large-dimensional random matrix theory and its applications in deep learning and wireless communications
On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions
Free dynamics of feature learning processes
A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors
Stability of the scattering transform for deformations with minimal regularity
Universality of approximate message passing with semirandom matrices
High-Dimensional Analysis of Double Descent for Linear Regression with Random Projections
Universality of regularized regression estimators in high dimensions
Benign Overfitting and Noisy Features
Unnamed Item
Unnamed Item
Precise statistical analysis of classification accuracies for adversarial training
On the robustness of minimum norm interpolators and regularized empirical risk minimizers
AdaBoost and robust one-bit compressed sensing
A Unifying Tutorial on Approximate Message Passing
A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
The interpolation phase transition in neural networks: memorization and generalization under lazy training
A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent*
For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability




This page was built for publication: The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve