The curse of overparametrization in adversarial training: precise analysis of robust generalization for random features regression
From MaRDI portal
Publication:6550964
Recommendations
- Precise statistical analysis of classification accuracies for adversarial training
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- Theoretical investigation of generalization bounds for adversarial learning of deep neural networks
- Dimensionality Reduction, Regularization, and Generalization in Overparameterized Regressions
- Adversarial examples in random neural networks with general activations
Cites work
- scientific article; zbMATH DE number 4061904
- A mean field view of the landscape of two-layer neural networks
- A model of double descent for high-dimensional binary linear classification
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- A random matrix approach to neural networks
- Analysis of a two-layer neural network via displacement convexity
- Deep learning: a statistical viewpoint
- DeepMoM: Robust Deep Learning With Median-of-Means
- Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung.
- Modern Coding Theory
- Nonlinear random matrix theory for deep learning
- On the Adversarial Robustness of Robust Estimators
- On the impact of predictor geometry on the performance on high-dimensional ridge-regularized generalized robust regression estimators
- On the robustness to adversarial corruption and to heavy-tailed data of the Stahel–Donoho median of means
- Precise Error Analysis of Regularized \(M\)-Estimators in High Dimensions
- Precise statistical analysis of classification accuracies for adversarial training
- Provable tradeoffs in adversarially robust classification
- Reconciling modern machine-learning practice and the classical bias-variance trade-off
- Surprises in high-dimensional ridgeless least squares interpolation
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- The implicit bias of gradient descent on separable data
- The spectrum of random inner-product kernel matrices
- Theoretical Insights Into the Optimization Landscape of Over-Parameterized Shallow Neural Networks
- Universality Laws for High-Dimensional Learning With Random Features
Cited in (2)