Learning curves of generic features maps for realistic datasets with a teacher-student model*
Publication: 5055409
DOI: 10.1088/1742-5468/ac9825
OpenAlex: W3171395320
MaRDI QID: Q5055409
FDO: Q5055409
Authors: Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, Marc Mézard, Lenka Zdeborová
Publication date: 13 December 2022
Published in: Journal of Statistical Mechanics: Theory and Experiment
Full work available at URL: https://arxiv.org/abs/2102.08127
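The paper's title refers to the teacher-student setting: labels are generated by a fixed "teacher" rule, a "student" model is trained on them, and the learning curve tracks test error as a function of the number of training samples. Below is a minimal illustrative sketch of that idea, assuming a linear Gaussian teacher and a ridge-regression student; the dimension, regularization strength, and noise-free labels are arbitrary choices for illustration, not the generic feature maps analysed in the paper.

```python
# Minimal teacher-student learning-curve sketch (illustrative only;
# assumes a linear Gaussian teacher and a ridge-regression student,
# none of which is taken verbatim from the paper).
import numpy as np

rng = np.random.default_rng(0)
d = 200                                      # input dimension
teacher = rng.normal(size=d) / np.sqrt(d)    # fixed "teacher" weights

def generalization_error(n, lam=1e-2, n_test=2000):
    """Train a ridge 'student' on n teacher-labelled samples
    and return its test mean-squared error."""
    X = rng.normal(size=(n, d))
    y = X @ teacher
    # Ridge solution: w = (X^T X + lam * I)^{-1} X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    X_test = rng.normal(size=(n_test, d))
    return np.mean((X_test @ w - X_test @ teacher) ** 2)

# Empirical learning curve: test error versus sample size
for n in [50, 100, 200, 400, 800]:
    print(f"n = {n:4d}  test MSE = {generalization_error(n):.4f}")
```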
Recommendations
- Asymptotic learning curves of kernel methods: empirical data versus teacher–student paradigm
- Locality defeats the curse of dimensionality in convolutional teacher–student scenarios*
- Generalisation error in learning with random features and the hidden manifold model*
- Generalization from educated teachers
Cites Work
- Convex analysis and monotone operator theory in Hilbert spaces
- Title not available
- High-dimensional probability. An introduction with applications in data science
- Concentration inequalities. A nonasymptotic theory of independence
- Cox's regression model for counting processes: A large sample study
- Title not available
- Eigenvectors of some large sample covariance matrix ensembles
- Optimal rates for the regularized least-squares algorithm
- An introduction to random matrices
- Title not available
- Some inequalities for Gaussian processes and applications
- Concentration of measure and spectra of random matrices: applications to correlation matrices, elliptical distributions and beyond
- On the convergence of the extremal eigenvalues of empirical covariance matrices with dependence
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- Deterministic equivalents for certain functionals of large random matrices
- Information, Physics, and Computation
- The spectrum of kernel random matrices
- On robust regression with high-dimensional predictors
- Statistical mechanics of learning
- Support vector machines learning noisy polynomial rules
- Precise error analysis of regularized \(M\)-estimators in high dimensions
- Probability
- Reconciling modern machine-learning practice and the classical bias-variance trade-off
- The spectral norm of random inner-product kernel matrices
- The spectrum of random inner-product kernel matrices
- High-dimensional asymptotics of prediction: ridge regression and classification
- Benign overfitting in linear regression
- The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression
- Surprises in high-dimensional ridgeless least squares interpolation
- Two models of double descent for weak features
- When do neural networks outperform kernel methods?*
- A jamming transition from under- to over-parametrization affects generalization in deep learning
- Large scale analysis of generalization error in learning using margin based classification methods
Cited In (6)
- An introduction to machine learning: a perspective from statistical physics
- Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
- Universality of regularized regression estimators in high dimensions
- Free dynamics of feature learning processes
- Debiasing convex regularized estimators and interval estimation in linear models
- Phase transition and higher order analysis of \(L_q\) regularization under dependence