For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability
Publication:5873932
Cites work
- 10.1162/153244302760200704
- A revisitation of formulae for the Moore-Penrose inverse of modified matrices
- Benign overfitting in linear regression
- Distribution of eigenvalues for some sets of random matrices
- Generalized Inversion of Modified Matrices
- Interpolation of scattered data: distance matrices and conditionally positive definite functions
- Just interpolate: kernel "ridgeless" regression can generalize
- Learnability, stability and uniform convergence
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Reconciling modern machine-learning practice and the classical bias-variance trade-off
- Statistics for high-dimensional data: methods, theory and applications
- Support Vector Machines
- Surprises in high-dimensional ridgeless least squares interpolation
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- The spectrum of kernel random matrices
- Theory of classification: a survey of some recent advances
- Understanding machine learning: from theory to algorithms