Universality of regularized regression estimators in high dimensions
DOI: 10.1214/23-AOS2309 · arXiv: 2206.07936 · MaRDI QID: Q6183759 · FDO: Q6183759
Authors: Qiyang Han, Yandi Shen
Publication date: 4 January 2024
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/2206.07936
Recommendations
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Fundamental barriers to high-dimensional regression with convex penalties
- The Lasso with general Gaussian designs with applications to hypothesis testing
- Asymptotics for high dimensional regression \(M\)-estimates: fixed design results
Keywords: Lasso; ridge regression; random matrix theory; robust regression; high-dimensional asymptotics; universality; Gaussian comparison inequalities; Lindeberg's principle
MSC classification: Random matrices (probabilistic aspects) (60B20) · Approximations to statistical distributions (nonasymptotic) (62E17) · Functional limit theorems; invariance principles (60F17)
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Title not available
- Estimation of the mean of a multivariate normal distribution
- Robust regression: Asymptotics, conjectures and Monte Carlo
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Robust Estimation of a Location Parameter
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- Title not available
- Mean Field Models for Spin Glasses
- Title not available
- Mean field models for spin glasses. Volume I: Basic examples.
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- The LASSO Risk for Gaussian Matrices
- A generalization of the Lindeberg principle
- Universality in polytope phase transitions and message passing algorithms
- The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing
- On robust regression with high-dimensional predictors
- Applications of the Lindeberg Principle in Communications and Statistical Learning
- Ridge regression and asymptotic minimax estimation over spheres of growing dimension
- Fundamental limits of symmetric low-rank matrix estimation
- Precise Error Analysis of Regularized \(M\)-Estimators in High Dimensions
- On the impact of predictor geometry on the performance of high-dimensional ridge-regularized generalized robust regression estimators
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Second-order Stein: SURE for SURE and other applications in high-dimensional inference
- Universality laws for randomized dimension reduction, with applications
- High-dimensional central limit theorems by Stein's method
- High-dimensional asymptotics of prediction: ridge regression and classification
- A modern maximum-likelihood theory for high-dimensional logistic regression
- Universality of approximate message passing algorithms
- The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning
- Optimal errors and phase transitions in high-dimensional generalized linear models
- De-biasing the Lasso with degrees-of-freedom adjustment
- Surprises in high-dimensional ridgeless least squares interpolation
- Fundamental barriers to high-dimensional regression with convex penalties
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- The Generalization Error of Random Features Regression: Precise Asymptotics and the Double Descent Curve
- Does SLOPE outperform bridge regression?
- Central limit theorem and bootstrap approximation in high dimensions: near \(1/\sqrt{n}\) rates via implicit smoothing
- A model of double descent for high-dimensional binary linear classification
- Nearly optimal central limit theorem and bootstrap approximations in high dimensions
- Debiasing convex regularized estimators and interval estimation in linear models
- Approximate message passing algorithms for rotationally invariant matrices
- Generalisation error in learning with random features and the hidden manifold model*
- Mean field asymptotics in high-dimensional statistics: from exact results to efficient algorithms
- Learning curves of generic features maps for realistic datasets with a teacher-student model*
Cited In (4)
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- Regularization after retention in ultrahigh dimensional linear regression models
- Regularized parameter estimation of high dimensional distribution
- Approximate message passing with rigorous guarantees for pooled data and quantitative group testing
This page was built for publication: Universality of regularized regression estimators in high dimensions