scientific article; zbMATH DE number 7626737
From MaRDI portal
Publication:5053228
Daniel Hsu, Mikhail Belkin, Adhyyan Narang, Vidya Muthukumar, Vignesh Subramanian, Anant Sahai
Publication date: 6 December 2022
Full work available at URL: https://arxiv.org/abs/2005.08054
Title: Classification vs. regression in overparameterized regimes: does the loss function matter?
Related Items
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
- Surprises in high-dimensional ridgeless least squares interpolation
- Newton-MR: inexact Newton method with minimum residual sub-problem solver
- On the robustness of minimum norm interpolators and regularized empirical risk minimizers
- AdaBoost and robust one-bit compressed sensing
Uses Software
Cites Work
- Hanson-Wright inequality and sub-Gaussian concentration
- Some inequalities for Gaussian processes and applications
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Adaptive estimation of a quadratic functional by model selection
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- Asymptotics of empirical eigenstructure for high dimensional spiked covariance
- Surprises in high-dimensional ridgeless least squares interpolation
- On the doubt about margin explanation of boosting
- High-dimensional generalized linear models and the lasso
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- High-Dimensional Statistics
- 10.1162/153244303321897690
- 10.1162/1532443041827925
- Two Models of Double Descent for Weak Features
- On the proliferation of support vectors in high dimensions
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- Benign overfitting in linear regression
- A model of double descent for high-dimensional binary linear classification
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Elements of Information Theory
- Convexity, Classification, and Risk Bounds
- Comparing dynamics: deep neural networks versus glassy systems