SLOPE is adaptive to unknown sparsity and asymptotically minimax
Abstract: We consider high-dimensional sparse regression problems in which we observe \(y = X\beta + z\), where \(X\) is an \(n \times p\) design matrix and \(z\) is an \(n\)-dimensional vector of independent Gaussian errors, each with variance \(\sigma^2\). Our focus is on the recently introduced SLOPE estimator (Bogdan et al., 2014), which regularizes the least-squares estimates with the rank-dependent penalty \(\sum_{1 \le i \le p} \lambda_i |\hat{\beta}|_{(i)}\), where \(|\hat{\beta}|_{(i)}\) is the \(i\)th largest magnitude of the fitted coefficients. Under Gaussian designs, where the entries of \(X\) are i.i.d. \(\mathcal{N}(0, 1/n)\), we show that SLOPE, with weights \(\lambda_i\) just about equal to \(\sigma \cdot \Phi^{-1}(1 - iq/(2p))\) (\(\Phi^{-1}(\alpha)\) is the \(\alpha\)th quantile of a standard normal and \(q\) is a fixed number in \((0,1)\)), achieves a squared error of estimation obeying \[ \sup_{\|\beta\|_0 \le k} \, \mathbb{P}\left( \|\hat{\beta}_{\text{SLOPE}} - \beta\|^2 > (1+\epsilon) \, 2\sigma^2 k \log(p/k) \right) \longrightarrow 0 \] as the dimension \(p\) increases to \(\infty\), and where \(\epsilon > 0\) is an arbitrarily small constant. This holds under a weak assumption on the sparsity level, namely, \(k/p \rightarrow 0\) and \((k \log p)/n \rightarrow 0\), and is sharp in the sense that this is the best possible error any estimator can achieve. A remarkable feature is that SLOPE does not require any knowledge of the degree of sparsity, and yet automatically adapts to yield optimal total squared errors over a wide range of \(\ell_0\)-sparsity classes. We are not aware of any other estimator with this property.
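As a minimal sketch (assuming NumPy and SciPy are available; the variable names are illustrative, not from the paper), the BH-type weights \(\lambda_i = \sigma \cdot \Phi^{-1}(1 - iq/(2p))\) and the sorted-\(\ell_1\) penalty \(\sum_i \lambda_i |\hat{\beta}|_{(i)}\) described in the abstract can be computed as follows:

```python
import numpy as np
from scipy.stats import norm

def slope_weights(p, q=0.1, sigma=1.0):
    """BH-type SLOPE weights: lambda_i = sigma * Phi^{-1}(1 - i*q/(2p)),
    for i = 1, ..., p. These are strictly decreasing in i."""
    i = np.arange(1, p + 1)
    return sigma * norm.ppf(1 - i * q / (2 * p))

def sorted_l1_penalty(beta, lam):
    """Sorted-l1 (SLOPE) penalty: sum_i lambda_i * |beta|_(i),
    where |beta|_(i) is the i-th largest magnitude of beta."""
    mags = np.sort(np.abs(beta))[::-1]  # magnitudes in decreasing order
    return float(np.dot(lam, mags))

p = 10
lam = slope_weights(p, q=0.1)
# A sparse illustrative coefficient vector (k = 3 nonzeros):
beta = np.array([3.0, -1.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
pen = sorted_l1_penalty(beta, lam)
```

Because the weights decrease with rank, the largest fitted coefficient pays the heaviest penalty; this rank dependence is what distinguishes SLOPE from the Lasso, which applies one flat weight to all coefficients.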
Cites work
- scientific article; zbMATH DE number 720689 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- scientific article; zbMATH DE number 3390139 (no title available)
- A data-driven block thresholding approach to wavelet estimation
- A nonparametric empirical Bayes approach to adaptive minimax estimation
- A significance test for the lasso
- A two stage shrinkage testimator for the mean of an exponential distribution
- Adapting to Unknown Smoothness via Wavelet Shrinkage
- Adapting to unknown sparsity by controlling the false discovery rate
- Aggregation for Gaussian regression
- Asymptotic equivalence theory for nonparametric regression with random design
- Controlling the false discovery rate via knockoffs
- Counting faces of randomly projected polytopes when the projection radically lowers dimension
- Estimating false discovery proportion under arbitrary covariance dependence
- Estimation of the mean of a multivariate normal distribution
- Exponential Bounds Implying Construction of Compressed Sensing Matrices, Error-Correcting Codes, and Neighborly Polytopes by Random Sampling
- Gaussian graphical model estimation with false discovery rate control
- Gaussian model selection
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- High-dimensional Ising model selection using \(\ell _{1}\)-regularized logistic regression
- Ideal spatial adaptation by wavelet shrinkage
- Inequalities: theory of majorization and its applications
- Lasso-type recovery of sparse representations for high-dimensional data
- Local asymptotic coding and the minimum description length
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Minimax estimation of the mean of a normal distribution when the parameter space is restricted
- Minimax risk over \(l_ p\)-balls for \(l_ q\)-error
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- Model selection and sharp asymptotic minimaxity
- Model selection for Gaussian regression with random design
- Model selection for regression on a random design
- Near-ideal model selection by \(\ell _{1}\) minimization
- Nearly unbiased variable selection under minimax concave penalty
- Nonmetric multidimensional scaling. A numerical method
- On the conditions used to prove oracle results for the Lasso
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Rate minimaxity of the Lasso and Dantzig selector for the \(l_{q}\) loss in \(l_{r}\) balls
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- SLOPE-adaptive variable selection via convex optimization
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Simultaneous Regression Shrinkage, Variable Selection, and Supervised Clustering of Predictors with OSCAR
- Simultaneous analysis of Lasso and Dantzig selector
- Stable signal recovery from incomplete and inaccurate measurements
- The Adaptive Lasso and Its Oracle Properties
- The Covariance Inflation Criterion for Adaptive Model Selection
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- The LASSO Risk for Gaussian Matrices
- The control of the false discovery rate in multiple testing under dependency.
- The risk inflation criterion for multiple regression
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in (47)
- On the exponentially weighted aggregate with the Laplace prior
- Slope meets Lasso: improved oracle bounds and optimality
- SLOPE-adaptive variable selection via convex optimization
- A Unifying Tutorial on Approximate Message Passing
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Stab-GKnock: controlled variable selection for partially linear models using generalized knockoffs
- Fundamental barriers to high-dimensional regression with convex penalties
- Regularization and the small-ball method. I: Sparse recovery
- Bayesian factor-adjusted sparse regression
- Group SLOPE – Adaptive Selection of Groups of Predictors
- Iterative algorithm for discrete structure recovery
- Learning from MOM's principles: Le Cam's approach
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Predictor ranking and false discovery proportion control in high-dimensional regression
- Bayesian estimation of sparse signals with a continuous spike-and-slab prior
- Does SLOPE outperform bridge regression?
- Sharp oracle inequalities for low-complexity priors
- Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
- The spike-and-slab LASSO
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Optimal false discovery control of minimax estimators
- Nearly optimal minimax estimator for high-dimensional sparse linear regression
- Sharp oracle inequalities for square root regularization
- A sharp oracle inequality for graph-slope
- Nested model averaging on solution path for high-dimensional linear regression
- Sparse index clones via the sorted \(\ell_1\)-norm
- Low-rank tensor regression for selection of grouped variables
- Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem
- Improved bounds for square-root Lasso and square-root slope
- On spike and slab empirical Bayes multiple testing
- Robust machine learning by median-of-means: theory and practice
- Simple expressions of the Lasso and SLOPE estimators in low-dimension
- Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk estimate
- Sorted concave penalized regression
- Fundamental limits of weak recovery with applications to phase retrieval
- Sharp multiple testing boundary for sparse sequences
- On the asymptotic properties of SLOPE
- Regularization and the small-ball method. II: Complexity dependent error rates
- Adaptive Huber regression on Markov-dependent data
- RANK: Large-Scale Inference With Graphical Nonlinear Knockoffs
- Sparse inference of the drift of a high-dimensional Ornstein-Uhlenbeck process
- Scalable approximations for generalized linear problems
- Characterizing the SLOPE trade-off: a variational perspective and the Donoho-Tanner limit
- Oracle inequalities for high-dimensional prediction
- Variable selection via adaptive false negative control in linear regression
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
- Iterative gradient descent for outlier detection