Learning curves of generic features maps for realistic datasets with a teacher-student model*
From MaRDI portal
Publication: 5055409
Recommendations
- Asymptotic learning curves of kernel methods: empirical data versus teacher–student paradigm
- Locality defeats the curse of dimensionality in convolutional teacher–student scenarios*
- Generalisation error in learning with random features and the hidden manifold model*
- Generalization from educated teachers
Cites work
- scientific article; zbMATH DE number 5278585
- scientific article; zbMATH DE number 1273988
- scientific article; zbMATH DE number 3244317
- A jamming transition from under- to over-parametrization affects generalization in deep learning
- An introduction to random matrices
- Benign overfitting in linear regression
- Concentration inequalities. A nonasymptotic theory of independence
- Concentration of measure and spectra of random matrices: applications to correlation matrices, elliptical distributions and beyond
- Convex analysis and monotone operator theory in Hilbert spaces
- Cox's regression model for counting processes: A large sample study
- Deterministic equivalents for certain functionals of large random matrices
- Eigenvectors of some large sample covariance matrix ensembles
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- High-dimensional asymptotics of prediction: ridge regression and classification
- High-dimensional probability. An introduction with applications in data science
- Information, Physics, and Computation
- Large scale analysis of generalization error in learning using margin based classification methods
- On robust regression with high-dimensional predictors
- On the convergence of the extremal eigenvalues of empirical covariance matrices with dependence
- Optimal rates for the regularized least-squares algorithm
- Precise error analysis of regularized \(M\)-estimators in high dimensions
- Probability
- Reconciling modern machine-learning practice and the classical bias-variance trade-off
- Some inequalities for Gaussian processes and applications
- Statistical mechanics of learning
- Support vector machines learning noisy polynomial rules
- Surprises in high-dimensional ridgeless least squares interpolation
- The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression
- The spectral norm of random inner-product kernel matrices
- The spectrum of kernel random matrices
- The spectrum of random inner-product kernel matrices
- Two models of double descent for weak features
- When do neural networks outperform kernel methods?*
Cited in (8)
- Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
- Locality defeats the curse of dimensionality in convolutional teacher–student scenarios*
- Learning curves of generic features maps for realistic datasets with a teacher-student model
- Free dynamics of feature learning processes
- An introduction to machine learning: a perspective from statistical physics
- Phase transition and higher order analysis of \(L_q\) regularization under dependence
- Universality of regularized regression estimators in high dimensions
- Debiasing convex regularized estimators and interval estimation in linear models