For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability
DOI: 10.1142/S0219530522400115
OpenAlex: W4309259897
MaRDI QID: Q5873932
FDO: Q5873932
Authors: Akshay Rangamani, Lorenzo Rosasco, Tomaso Poggio
Publication date: 10 February 2023
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530522400115
Keywords: kernel regression, high-dimensional statistics, overparameterization, algorithmic stability, minimum-norm interpolation
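
The title's claim concerns minimum-norm interpolation: among all functions in a reproducing kernel Hilbert space that fit the training data exactly, the pseudoinverse solution has the smallest norm, and the paper argues this choice also maximizes algorithmic stability. Below is a minimal illustrative sketch, not code from the paper, of such a minimum-norm interpolating kernel machine; the Gaussian kernel, the `gamma` value, and all function names are assumptions chosen for illustration.

```python
# Illustrative sketch of a minimum-norm interpolating kernel machine:
# among all RKHS functions interpolating the data, f(x) = k(x)^T K^+ y
# has the smallest RKHS norm. Kernel choice and parameters are assumptions.
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and rows of B.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def fit_min_norm_interpolant(X, y, gamma=1.0):
    # Coefficients of the minimum-norm interpolant: c = K^+ y.
    # With a strictly positive definite kernel and distinct points,
    # K is invertible and this is the unique interpolating solution.
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.pinv(K) @ y

def predict(X_train, coeffs, X_new, gamma=1.0):
    # f(x) = sum_i c_i k(x, x_i)
    return gaussian_kernel(X_new, X_train, gamma) @ coeffs

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)
c = fit_min_norm_interpolant(X, y)
print(np.max(np.abs(predict(X, c, X) - y)))  # ~0: the training data are interpolated
```

Stability here refers to how little the learned solution changes when a training point is perturbed; per the title, the minimum-norm interpolant above is the most stable among the (generally many) interpolating ERM solutions.
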
Cites Work
- Statistics for high-dimensional data. Methods, theory and applications
- Support Vector Machines
- Distribution of eigenvalues for some sets of random matrices
- Stability and generalization (DOI: 10.1162/153244302760200704)
- Understanding machine learning. From theory to algorithms
- Interpolation of scattered data: distance matrices and conditionally positive definite functions
- Generalized Inversion of Modified Matrices
- Theory of Classification: a Survey of Some Recent Advances
- The spectrum of kernel random matrices
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Learnability, stability and uniform convergence
- A revisitation of formulae for the Moore-Penrose inverse of modified matrices
- Just interpolate: kernel "ridgeless" regression can generalize
- Reconciling modern machine-learning practice and the classical bias-variance trade-off
- Benign overfitting in linear regression
- Surprises in high-dimensional ridgeless least squares interpolation
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
Cited In (1)