Generalization error rates in kernel regression: the crossover from the noiseless to noisy regime*
DOI: 10.1088/1742-5468/ac9829
OpenAlex: W3213347999
MaRDI QID: Q5055412
Authors: Hugo Cui, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová
Publication date: 13 December 2022
Published in: Journal of Statistical Mechanics: Theory and Experiment
Full work available at URL: https://arxiv.org/abs/2105.15004
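For orientation, here is a minimal sketch of the kind of experiment the paper analyses: the test error of kernel ridge regression in the noiseless versus noisy regime, as the regularization strength varies. It uses scikit-learn (which appears among the cited works below); the target function, dimensions, sample sizes, and kernel width are illustrative assumptions, not the paper's setup.

```python
# Sketch (not the authors' code): kernel ridge regression test error,
# noiseless vs. noisy labels, across ridge regularization strengths.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
d, n_train, n_test = 10, 500, 2000  # illustrative sizes

def target(X):
    # Smooth toy target function (an assumption for this sketch).
    return np.sin(X.sum(axis=1) / np.sqrt(d))

X_train = rng.standard_normal((n_train, d))
X_test = rng.standard_normal((n_test, d))
y_test = target(X_test)  # measure error against the clean target

for noise in (0.0, 0.5):              # noiseless vs. noisy regime
    y_train = target(X_train) + noise * rng.standard_normal(n_train)
    for alpha in (1e-8, 1e-3, 1e-1):  # ridge regularization strength
        model = KernelRidge(alpha=alpha, kernel="rbf", gamma=1.0 / d)
        model.fit(X_train, y_train)
        err = mean_squared_error(y_test, model.predict(X_test))
        print(f"noise={noise:.1f}  alpha={alpha:.0e}  test MSE={err:.4f}")
```

Sweeping the noise level and regularization this way exposes the crossover the title refers to: with noiseless labels, vanishing regularization (interpolation) can be near-optimal, while with noisy labels a finite ridge penalty is needed to control the error.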
Recommendations
- Just interpolate: kernel "ridgeless" regression can generalize
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- Regularization in kernel learning
- Generalization bounds of a compressed regression learning algorithm
- Learning with generalization capability by kernel methods of bounded complexity
Cites Work
- Scikit-learn: machine learning in Python
- Acceleration of Stochastic Approximation by Averaging
- Eigenvectors of some large sample covariance matrix ensembles
- Optimal rates for the regularized least-squares algorithm
- Concentration of measure and isoperimetric inequalities in product spaces
- Bayesian learning for neural networks
- Ridge regression and asymptotic minimax estimation over spheres of growing dimension
- Support vector machines learning noisy polynomial rules
- Sobolev norm learning rates for regularized least-squares algorithms
- Precise Error Analysis of Regularized $M$-Estimators in High Dimensions
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- High-dimensional dynamics of generalization error in neural networks
- High-dimensional asymptotics of prediction: ridge regression and classification
- Benign overfitting in linear regression
- Surprises in high-dimensional ridgeless least squares interpolation
- Two models of double descent for weak features
- When do neural networks outperform kernel methods?*
- Asymptotic learning curves of kernel methods: empirical data versus teacher–student paradigm
Cited In (2)