Second-order Stein: SURE for SURE and other applications in high-dimensional inference
From MaRDI portal
Publication: Q2054467
Abstract: Stein's formula states that a random variable of the form \(x^\top f(x) - \operatorname{div} f(x)\) is mean-zero for all functions \(f\) with integrable gradient. Here, \(\operatorname{div} f\) is the divergence of the function \(f\) and \(x\) is a standard normal vector. This paper proposes a Second-Order Stein formula that characterizes the variance of such random variables for all functions \(f\) with square-integrable gradient, and demonstrates the usefulness of this formula in various applications. In the Gaussian sequence model, a consequence of Stein's formula is Stein's Unbiased Risk Estimate (SURE), an unbiased estimate of the mean squared risk for almost any estimator \(\hat\mu\) of the unknown mean. A first application of the Second-Order Stein formula is an unbiased risk estimate for SURE itself (SURE for SURE): an unbiased estimate providing information about the squared distance between SURE and the squared estimation error of \(\hat\mu\). SURE for SURE has a simple form as a function of the data and is applicable to all \(\hat\mu\) with square-integrable gradient, e.g. the Lasso and the Elastic Net. In addition to SURE for SURE, the following applications are developed: (1) upper bounds on the risk of SURE when the estimation target is the mean squared error; (2) confidence regions based on SURE; (3) oracle inequalities satisfied by SURE-tuned estimates; (4) an upper bound on the variance of the size of the model selected by the Lasso; (5) explicit expressions of SURE for SURE for the Lasso and the Elastic Net; (6) in the linear model, a general semi-parametric scheme to de-bias a differentiable initial estimator for inference of a low-dimensional projection of the unknown \(\beta\), with a characterization of the variance after de-biasing; and (7) an accuracy analysis of a Gaussian Monte Carlo scheme to approximate the divergence of functions \(f\).
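The two ingredients of the abstract can be illustrated numerically in the Gaussian sequence model \(y = \mu + \varepsilon\), \(\varepsilon \sim N(0, \sigma^2 I_n)\). The Python sketch below is not part of the publication: the sparse mean, the threshold \(\lambda = 1.5\), and the replication counts are illustrative assumptions. It checks that SURE for soft thresholding (the sequence-model Lasso) averages to the true squared error, and that the Gaussian Monte Carlo divergence approximation of item (7) agrees with the exact divergence.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, lam = 500, 1.0, 1.5
# Illustrative sparse mean: 25 strong coordinates, the rest zero (an assumption for the demo).
mu = np.concatenate([np.full(25, 5.0), np.zeros(n - 25)])

def soft_threshold(y, lam):
    """Coordinatewise soft thresholding, i.e. the Lasso in the sequence model."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def sure(y, lam, sigma):
    # SURE for soft thresholding: ||muhat - y||^2 - n*sigma^2 + 2*sigma^2*div,
    # where the divergence equals the number of coordinates above the threshold.
    muhat = soft_threshold(y, lam)
    div = np.count_nonzero(np.abs(y) > lam)
    return np.sum((muhat - y) ** 2) - n * sigma ** 2 + 2 * sigma ** 2 * div

# Unbiasedness check: average SURE against the average squared estimation error.
reps = 2000
sure_vals = np.empty(reps)
err_vals = np.empty(reps)
for r in range(reps):
    y = mu + sigma * rng.standard_normal(n)
    sure_vals[r] = sure(y, lam, sigma)
    err_vals[r] = np.sum((soft_threshold(y, lam) - mu) ** 2)
mean_sure, mean_err = float(sure_vals.mean()), float(err_vals.mean())
print("mean SURE:", mean_sure, " mean squared error:", mean_err)

# Item (7): Gaussian Monte Carlo approximation of the divergence,
#   div f(y) ~ E_z[ z . (f(y + delta*z) - f(y)) / delta ],  z standard normal.
def mc_divergence(f, y, delta=1e-3, mc_reps=400):
    ests = np.empty(mc_reps)
    for r in range(mc_reps):
        z = rng.standard_normal(y.shape[0])
        ests[r] = z @ (f(y + delta * z) - f(y)) / delta
    return float(ests.mean())

y = mu + sigma * rng.standard_normal(n)
exact_div = float(np.count_nonzero(np.abs(y) > lam))
mc_div = mc_divergence(lambda v: soft_threshold(v, lam), y)
print("exact divergence:", exact_div, " Monte Carlo estimate:", mc_div)
```

The divergence of soft thresholding is simply the number of active coordinates because each surviving coordinate has unit derivative and each thresholded coordinate has derivative zero; the Monte Carlo scheme recovers this count without differentiating \(f\) analytically.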
Recommendations
- On Stein's unbiased risk estimate for reduced rank estimators
- scientific article; zbMATH DE number 1048002
- On unbiased and improved loss estimation for the mean of a multivariate normal distribution with unknown variance.
- From multiple Gaussian sequences to functional data and beyond: a Stein estimation approach
- The high dimensional statistical analysis of Lasso with second moment noise
Cites work
- scientific article; zbMATH DE number 5957408
- scientific article; zbMATH DE number 4056770
- scientific article; zbMATH DE number 3438144
- scientific article; zbMATH DE number 1444745
- scientific article; zbMATH DE number 6438182
- A general theory of concave regularization for high-dimensional sparse estimation problems
- A short survey of Stein's method
- Adapting to Unknown Smoothness via Wavelet Shrinkage
- Adaptive estimation of a quadratic functional by model selection.
- Aggregation of affine estimators
- Analysis and geometry of Markov diffusion operators
- Bounds on the prediction error of penalized least squares estimators with convex penalty
- Concentration inequalities. A nonasymptotic theory of independence
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Confidence sets in sparse regression
- Convex functions and their applications. A contemporary approach
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Degrees of freedom in lasso problems
- Deviation optimal learning using greedy \(Q\)-aggregation
- Estimation of the mean of a multivariate normal distribution
- Excess optimism: how biased is the apparent error of an estimator tuned by SURE?
- High-dimensional graphs and variable selection with the Lasso
- High-dimensional regression with unknown variance
- Honest confidence regions for nonparametric regression
- Inference on treatment effects after selection among high-dimensional controls
- Information Theory and Mixing Least-Squares Regressions
- Just relax: convex programming methods for identifying sparse signals in noise
- Kullback-Leibler aggregation and misspecified generalized linear models
- Mean field models for spin glasses. Volume I: Basic examples.
- Nearly unbiased variable selection under minimax concave penalty
- Newton-Stein method: an optimization method for GLMs via Stein's lemma
- Normal Approximation by Stein’s Method
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Optimal bounds for aggregation of affine estimators
- Ordered linear smoothers
- Pivotal estimation via square-root lasso in nonparametric regression
- Rate minimaxity of the Lasso and Dantzig selector for the \(l_{q}\) loss in \(l_{r}\) balls
- Regularization and the small-ball method. I: Sparse recovery
- Scaled sparse linear regression
- Sharp thresholds for high-dimensional and noisy sparsity recovery using \(\ell_1\)-constrained quadratic programming (Lasso)
- Sharp oracle inequalities for aggregation of affine estimators
- Simultaneous analysis of Lasso and Dantzig selector
- Slope meets Lasso: improved oracle bounds and optimality
- Some comments on \(C_p\)
- Sparse estimation by exponential weighting
- Sparse matrix inversion with scaled Lasso
- Statistical significance in high-dimensional linear models
- Statistics for high-dimensional data. Methods, theory and applications.
- The Lasso problem and uniqueness
- The degrees of freedom of the Lasso for general design matrix
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Unbiased Risk Estimates for Singular Value Thresholding and Spectral Estimators
- Weak convergence and empirical processes. With applications to statistics
Cited in (11)
- Asymptotic normality of robust M-estimators with convex penalty
- Noise covariance estimation in multi-task high-dimensional linear models
- High-dimensional asymptotics of likelihood ratio tests in the Gaussian sequence model under convex constraints
- Stein's identities and the related topics: an instructive explanation on shrinkage, characterization, normal approximation and goodness-of-fit
- Degrees of freedom for piecewise Lipschitz estimators
- Universality of regularized regression estimators in high dimensions
- Stein's method for negatively associated random variables with applications to second-order stationary random fields
- Inadmissibility of the corrected Akaike information criterion
- Debiasing convex regularized estimators and interval estimation in linear models
- De-biasing the Lasso with degrees-of-freedom adjustment
- The Lasso with general Gaussian designs with applications to hypothesis testing