Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
From MaRDI portal
Publication:2091833
DOI: 10.1214/22-AOS2181 · zbMath: 1500.62002 · arXiv: 1912.10754 · Wikidata: Q114060456 · Scholia: Q114060456 · MaRDI QID: Q2091833
Publication date: 2 November 2022
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1912.10754
Keywords: decision theory · least squares · lower bounds · statistical learning theory · covariance matrices · anticoncentration
Mathematics Subject Classification: Linear regression; mixed models (62J05) · Random matrices (probabilistic aspects) (60B20) · Minimax procedures in statistical decision theory (62C20)
Related Items (4)
- Non-asymptotic bounds for the \(\ell_{\infty}\) estimator in linear regression with uniform noise
- Unnamed Item
- An elementary analysis of ridge regression with random design
- Suboptimality of constrained least squares and improvements via non-linear predictors
Cites Work
- Unnamed Item
- Performance of empirical risk minimization in linear aggregation
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- The lower tail of random quadratic forms with applications to ordinary least squares
- Covariance estimation for distributions with \({2+\varepsilon}\) moments
- Random design analysis of ridge regression
- On higher order isotropy conditions and lower bounds for sparse quadratic forms
- Concentration inequalities and moment bounds for sample covariance operators
- Robust linear least squares regression
- On the impact of predictor geometry on the performance on high-dimensional ridge-regularized generalized robust regression estimators
- Model selection for regularized least-squares algorithm in learning theory
- Spectral analysis of large dimensional random matrices
- PAC-Bayesian stochastic model selection
- High-dimensional asymptotics of prediction: ridge regression and classification
- Über monotone Matrixfunktionen
- Robust regression: Asymptotics, conjectures and Monte Carlo
- A distribution-free theory of nonparametric regression
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Statistical learning theory and stochastic optimization. Ecole d'Eté de Probabilités de Saint-Flour XXXI -- 2001.
- Some PAC-Bayesian theorems
- Prediction in the worst case
- Bootstrapping and sample splitting for high-dimensional, assumption-lean inference
- Mean estimation and regression under heavy-tailed distributions: A survey
- Optimal rates for the regularized least-squares algorithm
- Inverse Littlewood-Offord theorems and the condition number of random discrete matrices
- On the singular values of random matrices
- Aggregation for Gaussian regression
- The Littlewood-Offord problem and invertibility of random matrices
- Lower bounds on the smallest eigenvalue of a sample covariance matrix.
- Sharp lower bounds on the least singular value of a random matrix without the fourth moment condition
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Learning without Concentration
- Concentration Inequalities
- Optimal Phase Transitions in Compressed Sensing
- Non-asymptotic theory of random matrices: extreme singular values
- Small Ball Probabilities for Linear Images of High-Dimensional Distributions
- Bounding the Smallest Singular Value of a Random Matrix Without Concentration
- Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles
- Smallest singular value of a random rectangular matrix
- An Introduction to Random Matrices
- How Many Variables Should be Entered in a Regression Equation?
- Eigenvalues and Condition Numbers of Random Matrices
- The Hat Matrix in Regression and ANOVA
- Sample Covariance Matrices of Heavy-Tailed Distributions
- High-Dimensional Probability
- Competitive On-line Statistics
- Learning Theory and Kernel Machines
- From the Littlewood-Offord problem to the Circular Law: Universality of the spectral distribution of random matrices
- Small Ball Probability, Inverse Theorems, and Applications
- Robust Statistics
- Introduction to nonparametric estimation
- Relative loss bounds for on-line density estimation with the exponential family of distributions
- Ridge regression and asymptotic minimax estimation over spheres of growing dimension