The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
From MaRDI portal
Publication:5072044
DOI: 10.1002/CPA.22008
OpenAlex: W3172995164
MaRDI QID: Q5072044
FDO: Q5072044
Author name not available
Publication date: 25 April 2022
Published in: Communications on Pure and Applied Mathematics
Full work available at URL: https://arxiv.org/abs/1908.05355
Cited In (40)
- Overparameterization and Generalization Error: Weighted Trigonometric Interpolation
- On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions
- The inverse variance–flatness relation in stochastic gradient descent is critical for finding flat minima
- Stability of the scattering transform for deformations with minimal regularity
- A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
- The interpolation phase transition in neural networks: memorization and generalization under lazy training
- On the Benefit of Width for Neural Networks: Disappearance of Basins
- A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent*
- A Unifying Tutorial on Approximate Message Passing
- High-Dimensional Analysis of Double Descent for Linear Regression with Random Projections
- The common intuition to transfer learning can win or lose: case studies for linear regression
- Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- Theoretical issues in deep networks
- For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability
- Mehler’s Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
- Fluctuations, bias, variance and ensemble of learners: exact asymptotics for convex losses in high-dimension
- Precise learning curves and higher-order scaling limits for dot-product kernel regression
- Redundant representations help generalization in wide neural networks
- The dynamics of representation learning in shallow, non-linear autoencoders
- Universality of regularized regression estimators in high dimensions
- Benign Overfitting and Noisy Features
- Large-dimensional random matrix theory and its applications in deep learning and wireless communications
- Precise statistical analysis of classification accuracies for adversarial training
- A simple probabilistic neural network for machine understanding
- High dimensional binary classification under label shift: phase transition and regularization
- Title not available
- Double data piling: a high-dimensional solution for asymptotically perfect multi-category classification
- Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks
- A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors
- Free dynamics of feature learning processes
- Deep networks for system identification: a survey
- Deep learning: a statistical viewpoint
- Title not available
- SketchySGD: reliable stochastic optimization via randomized curvature estimates
- Universality of approximate message passing with semirandom matrices
- The curse of overparametrization in adversarial training: precise analysis of robust generalization for random features regression
- On the robustness of minimum norm interpolators and regularized empirical risk minimizers
- AdaBoost and robust one-bit compressed sensing
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers