An elementary analysis of ridge regression with random design
Publication: 2080945
DOI: 10.5802/crmath.367 · OpenAlex: W4297998635 · MaRDI QID: Q2080945 · FDO: Q2080945
Authors: Jaouad Mourtada, Lorenzo Rosasco
Publication date: 12 October 2022
Published in: Comptes Rendus. Mathématique. Académie des Sciences, Paris
Full work available at URL: https://arxiv.org/abs/2203.08564
Classification (MSC):
- Nonparametric regression and quantile regression (62G08)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Random matrices (probabilistic aspects) (60B20)
Cites Work
- Theory of Reproducing Kernels
- Nonparametric stochastic approximation with large step-sizes
- Support Vector Machines
- Title not available
- On early stopping in gradient descent learning
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Local Rademacher complexities
- User-friendly tail bounds for sums of random matrices
- Some applications of concentration inequalities to statistics
- Optimal rates for the regularized least-squares algorithm
- Title not available
- Performance of empirical risk minimization in linear aggregation
- Trace inequalities and quantum entropy: an introductory course
- DOI: 10.1162/1532443041424337
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- Random vectors in the isotropic position
- Learning from examples as an inverse problem
- Random design analysis of ridge regression
- Model selection for regularized least-squares algorithm in learning theory
- Sums of random Hermitian matrices and an inequality by Rudelson
- The lower tail of random quadratic forms with applications to ordinary least squares
- Non commutative Khintchine and Paley inequalities
- Strong converse for identification via quantum channels
- Concentration inequalities and moment bounds for sample covariance operators
- Optimal rates for regularization of statistical inverse learning problems
- Distribution-free robust linear regression
- Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
- Benign overfitting in linear regression
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization
- Title not available
Cited In (3)