Second-order Stein: SURE for SURE and other applications in high-dimensional inference
DOI: 10.1214/20-AOS2005 · zbMATH Open: 1486.62209 · arXiv: 1811.04121 · OpenAlex: W3204172662 · MaRDI QID: Q2054467 · FDO: Q2054467
Authors: Pierre C. Bellec, Cun-Hui Zhang
Publication date: 3 December 2021
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1811.04121
Recommendations
- On Stein's unbiased risk estimate for reduced rank estimators
- On unbiased and improved loss estimation for the mean of a multivariate normal distribution with unknown variance.
- From multiple Gaussian sequences to functional data and beyond: a Stein estimation approach
- The high dimensional statistical analysis of Lasso with second moment noise
Keywords: model selection; regression; elastic net; Stein's formula; variance estimate; risk estimate; debiased estimation; SURE for SURE; variance of model size
MSC classifications:
- Multivariate distribution of statistics (62H10)
- Nonparametric tolerance and confidence regions (62G15)
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Robustness and adaptive procedures (parametric inference) (62F35)
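As a brief illustration of the quantity in the title, Stein's unbiased risk estimate (SURE) can be checked numerically in the Gaussian sequence model \(y \sim N(\mu, \sigma^2 I_n)\): for a weakly differentiable estimator \(\hat\mu(y)\), \(\mathrm{SURE} = -n\sigma^2 + \|\hat\mu - y\|^2 + 2\sigma^2\,\mathrm{div}\,\hat\mu\) is unbiased for the risk \(\mathbb{E}\|\hat\mu - \mu\|^2\). The sketch below (arbitrary parameters, not code from the paper) verifies this for soft thresholding, whose divergence is the number of coordinates exceeding the threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, lam = 1000, 1.0, 1.5
# sparse mean vector: 50 signal coordinates, the rest zero (illustrative choice)
mu = np.concatenate([np.full(50, 3.0), np.zeros(n - 50)])

def soft_threshold(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def sure_soft(y, lam, sigma):
    # SURE = -n*sigma^2 + ||mu_hat - y||^2 + 2*sigma^2 * div(mu_hat)
    mu_hat = soft_threshold(y, lam)
    div = np.sum(np.abs(y) > lam)  # divergence of soft thresholding
    return -n * sigma**2 + np.sum((mu_hat - y) ** 2) + 2 * sigma**2 * div

# Monte Carlo check: the average of SURE tracks the true risk E||mu_hat - mu||^2
sures, risks = [], []
for _ in range(200):
    y = mu + sigma * rng.standard_normal(n)
    sures.append(sure_soft(y, lam, sigma))
    risks.append(np.sum((soft_threshold(y, lam) - mu) ** 2))
print(np.mean(sures), np.mean(risks))
```

The two printed averages agree up to Monte Carlo error, reflecting unbiasedness; the paper's "SURE for SURE" goes further and estimates the variance of such risk estimates.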
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Weak convergence and empirical processes. With applications to statistics
- Statistics for high-dimensional data. Methods, theory and applications.
- Some Comments on \(C_p\)
- Estimation of the mean of a multivariate normal distribution
- Sharp oracle inequalities for aggregation of affine estimators
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional graphs and variable selection with the Lasso
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Adapting to Unknown Smoothness via Wavelet Shrinkage
- Kullback-Leibler aggregation and misspecified generalized linear models
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- Deviation optimal learning using greedy \(Q\)-aggregation
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Honest confidence regions for nonparametric regression
- Confidence sets in sparse regression
- Scaled sparse linear regression
- Statistical significance in high-dimensional linear models
- Degrees of freedom in lasso problems
- Inference on treatment effects after selection among high-dimensional controls
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Sparse matrix inversion with scaled Lasso
- Just relax: convex programming methods for identifying sparse signals in noise
- Adaptive estimation of a quadratic functional by model selection.
- The Lasso problem and uniqueness
- Concentration inequalities. A nonasymptotic theory of independence
- Information Theory and Mixing Least-Squares Regressions
- Normal Approximation by Stein’s Method
- High-dimensional regression with unknown variance
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Rate minimaxity of the Lasso and Dantzig selector for the \(l_{q}\) loss in \(l_{r}\) balls
- Convex functions and their applications. A contemporary approach
- Analysis and geometry of Markov diffusion operators
- Mean field models for spin glasses. Volume I: Basic examples.
- Unbiased Risk Estimates for Singular Value Thresholding and Spectral Estimators
- Pivotal estimation via square-root lasso in nonparametric regression
- A short survey of Stein's method
- Ordered linear smoothers
- Sparse estimation by exponential weighting
- The degrees of freedom of the Lasso for general design matrix
- Aggregation of affine estimators
- Regularization and the small-ball method. I: Sparse recovery
- Slope meets Lasso: improved oracle bounds and optimality
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Optimal bounds for aggregation of affine estimators
- Bounds on the prediction error of penalized least squares estimators with convex penalty
- Newton-Stein method: an optimization method for GLMs via Stein's lemma
- Excess optimism: how biased is the apparent error of an estimator tuned by SURE?
Cited In (11)
- Asymptotic normality of robust \(M\)-estimators with convex penalty
- Noise covariance estimation in multi-task high-dimensional linear models
- Stein's identities and the related topics: an instructive explanation on shrinkage, characterization, normal approximation and goodness-of-fit
- High-dimensional asymptotics of likelihood ratio tests in the Gaussian sequence model under convex constraints
- Universality of regularized regression estimators in high dimensions
- Degrees of freedom for piecewise Lipschitz estimators
- Stein's method for negatively associated random variables with applications to second-order stationary random fields
- Inadmissibility of the corrected Akaike information criterion
- Debiasing convex regularized estimators and interval estimation in linear models
- De-biasing the Lasso with degrees-of-freedom adjustment
- The Lasso with general Gaussian designs with applications to hypothesis testing