Ridge regression and asymptotic minimax estimation over spheres of growing dimension
DOI: 10.3150/14-BEJ609 · zbMATH Open: 1388.62205 · arXiv: 1601.03900 · OpenAlex: W3105622673 · MaRDI QID: Q5963493 · FDO: Q5963493
Publication date: 22 February 2016
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/1601.03900
Mathematics Subject Classification
- Nonparametric regression and quantile regression (62G08)
- Linear regression; mixed models (62J05)
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Minimax procedures in statistical decision theory (62C20)
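The classification above centers on ridge regression. As a minimal illustrative sketch (not the paper's estimator or its asymptotic tuning; the penalty value and data here are arbitrary), the ridge estimate solves a penalized least-squares problem in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10                       # samples, dimension
X = rng.standard_normal((n, d))
beta = rng.standard_normal(d)       # true coefficients
y = X @ beta + 0.1 * rng.standard_normal(n)

lam = 1.0                           # ridge penalty (illustrative choice)
# Ridge estimate: argmin_b ||y - Xb||^2 + lam * ||b||^2,
# i.e. solve (X'X + lam * I) b = X'y
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

The penalty shrinks the estimate toward zero, trading bias for variance; the paper studies how to choose this trade-off minimax-optimally as the dimension grows.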
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Simultaneous analysis of Lasso and Dantzig selector
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Title not available
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Strong convergence of the empirical distribution of eigenvalues of large dimensional random matrices
- Sparsity oracle inequalities for the Lasso
- Scaled sparse linear regression
- DISTRIBUTION OF EIGENVALUES FOR SOME SETS OF RANDOM MATRICES
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Statistical decision theory and Bayesian analysis. 2nd ed
- Minimax risk over \(l_p\)-balls for \(l_q\)-error
- Local operator theory, random matrices and Banach spaces.
- Spectrum estimation for large dimensional covariance matrices using random matrix theory
- Amenability: a survey for statistical applications of Hunt-Stein and related conditions on groups
- Convergence rate of expected spectral distributions of large random matrices. II: Sample covariance matrices
- Rate minimaxity of the Lasso and Dantzig selector for the \(l_{q}\) loss in \(l_{r}\) balls
- A dozen de Finetti-style results in search of a theory
- Minimax estimation of the mean of a normal distribution when the parameter space is restricted
- Inequalities for the gamma function
- Optimal filtering of square-integrable signals in Gaussian noise
- Title not available
- Information inequalities for the Bayes risk
- Variance estimation in high-dimensional linear models
- Some inequalities satisfied by the quantities of information of Fisher and Shannon
- A proof of the Fisher information inequality via a data processing argument
- On minimax filtering over ellipsoids
- Ridge Regression and James-Stein Estimation: Review and Comments
- Conditional predictive inference post model selection
- Inadmissibility of maximum likelihood estimators in some multiple regression problems with three or more independent variables
- An ancillarity paradox which appears in multiple linear regression
- Optimal equivariant prediction for high-dimensional linear models with arbitrary predictor covariance
- How Many Variables Should be Entered in a Regression Equation?
- Estimation of a multivariate mean with constraints on the norm
- Title not available
- Convergence Rates of Spectral Distributions of Large Sample Covariance Matrices
- Admissible Estimators, Recurrent Diffusions, and Insoluble Boundary Value Problems
- Modified Bessel functions and their applications in probability and statistics
- Optimal prediction for linear regression with infinitely many parameters.
- Title not available
- Adaptive prediction and estimation in linear regression with infinitely many parameters.
- Quasilinear estimates of signals in \(L_2\)
- Information inequality bounds on the minimax risk (with an application to nonparametric regression)
Cited In (18)
- Ridge Regression Under Dense Factor Augmented Models
- Title not available
- High-dimensional asymptotics of prediction: ridge regression and classification
- High-Dimensional Factor Regression for Heterogeneous Subpopulations
- On the accuracy in high‐dimensional linear models and its application to genomic selection
- Adaptive estimation in multivariate response regression with hidden variables
- Universality of regularized regression estimators in high dimensions
- On the Behavior of the Risk of a LASSO-Type Estimator
- On the asymptotic risk of ridge regression with many predictors
- WONDER: Weighted one-shot distributed ridge regression in high dimensions
- A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors
- In Nonparametric and High-Dimensional Models, Bayesian Ignorability is an Informative Prior
- Generalization error rates in kernel regression: the crossover from the noiseless to noisy regime*
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- Significance testing in non-sparse high-dimensional linear models
- Smoothly varying regularization
- Surprises in high-dimensional ridgeless least squares interpolation
- Cleaning large correlation matrices: tools from random matrix theory
Recommendations
- Asymptotic Aspects of Ordinary Ridge Regression
- Ridge regression estimators in the linear regression models with non-spherical errors
- Theory of Ridge Regression Estimation with Applications
- Title not available
- Title not available
- Ridge Estimation under the Stochastic Restriction
- High-dimensional asymptotics of prediction: ridge regression and classification