SLOPE is adaptive to unknown sparsity and asymptotically minimax
DOI: 10.1214/15-AOS1397
zbMATH Open: 1338.62032
arXiv: 1503.08393
OpenAlex: W2963943067
MaRDI QID: Q292875
FDO: Q292875
Authors: Weijie Su, Emmanuel J. Candès
Publication date: 9 June 2016
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1503.08393
Keywords: SLOPE; adaptivity; sparse regression; Benjamini-Hochberg procedure; false discovery rate (FDR); FDR thresholding
MSC classifications:
- Nonparametric estimation (62G05)
- Nonparametric hypothesis testing (62G10)
- Minimax procedures in statistical decision theory (62C20)
- Paired and multiple comparisons; multiple testing (62J15)
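As context for the keywords above, here is a minimal sketch of the estimator the title refers to (sorted \(\ell_1\) penalized estimation, following the paper's abstract; \(q\) denotes the target FDR level):
\[
\hat{\beta} = \operatorname*{argmin}_{b \in \mathbb{R}^p} \; \frac{1}{2}\lVert y - Xb \rVert_2^2 + \sum_{i=1}^{p} \lambda_i \lvert b \rvert_{(i)},
\qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0,
\]
where \(\lvert b \rvert_{(1)} \ge \cdots \ge \lvert b \rvert_{(p)}\) are the absolute entries of \(b\) sorted in decreasing order. The Benjamini-Hochberg-style weights are \(\lambda_i = \sigma\,\Phi^{-1}(1 - iq/(2p))\), with \(\Phi^{-1}\) the standard normal quantile function; this FDR-inspired choice of decreasing weights is what makes the procedure adaptive to unknown sparsity.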
Cites Work
- Simultaneous Regression Shrinkage, Variable Selection, and Supervised Clustering of Predictors with OSCAR
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Gaussian graphical model estimation with false discovery rate control
- Ideal spatial adaptation by wavelet shrinkage
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Title not available
- Lasso-type recovery of sparse representations for high-dimensional data
- Estimation of the mean of a multivariate normal distribution
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
- Adapting to Unknown Smoothness via Wavelet Shrinkage
- Title not available
- Estimating false discovery proportion under arbitrary covariance dependence
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
- Inequalities: theory of majorization and its applications
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- The control of the false discovery rate in multiple testing under dependency
- A significance test for the lasso
- Adapting to unknown sparsity by controlling the false discovery rate
- Gaussian model selection
- High-dimensional Ising model selection using \(\ell_1\)-regularized logistic regression
- Nonmetric multidimensional scaling: a numerical method
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over \(\ell_q\)-Balls
- Minimax risk over \(l_p\)-balls for \(l_q\)-error
- The risk inflation criterion for multiple regression
- Stable signal recovery from incomplete and inaccurate measurements
- Controlling the false discovery rate via knockoffs
- Aggregation for Gaussian regression
- Title not available
- SLOPE-adaptive variable selection via convex optimization
- Rate minimaxity of the Lasso and Dantzig selector for the \(l_{q}\) loss in \(l_{r}\) balls
- Minimax estimation of the mean of a normal distribution when the parameter space is restricted
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- Asymptotic equivalence theory for nonparametric regression with random design
- Near-ideal model selection by \(\ell_1\) minimization
- A data-driven block thresholding approach to wavelet estimation
- Model selection for Gaussian regression with random design
- Model selection and sharp asymptotic minimaxity
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Counting faces of randomly projected polytopes when the projection radically lowers dimension
- Model selection for regression on a random design
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- A two stage shrinkage testimator for the mean of an exponential distribution
- A nonparametric empirical Bayes approach to adaptive minimax estimation
- The Covariance Inflation Criterion for Adaptive Model Selection
- Local asymptotic coding and the minimum description length
- The LASSO Risk for Gaussian Matrices
- Exponential Bounds Implying Construction of Compressed Sensing Matrices, Error-Correcting Codes, and Neighborly Polytopes by Random Sampling
Cited In (47)
- SLOPE-adaptive variable selection via convex optimization
- Optimal false discovery control of minimax estimators
- Nearly optimal minimax estimator for high-dimensional sparse linear regression
- Simple expressions of the Lasso and SLOPE estimators in low-dimension
- Sharp oracle inequalities for square root regularization
- On spike and slab empirical Bayes multiple testing
- Sorted concave penalized regression
- A Unifying Tutorial on Approximate Message Passing
- Bayesian estimation of sparse signals with a continuous spike-and-slab prior
- Regularization and the small-ball method. I: Sparse recovery
- The spike-and-slab LASSO
- Improved bounds for square-root Lasso and square-root slope
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Sparse index clones via the sorted \(\ell_1\)-norm
- Oracle inequalities for high-dimensional prediction
- Slope meets Lasso: improved oracle bounds and optimality
- Fundamental limits of weak recovery with applications to phase retrieval
- Sparse inference of the drift of a high-dimensional Ornstein-Uhlenbeck process
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Sharp oracle inequalities for low-complexity priors
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Low-rank tensor regression for selection of grouped variables
- Robust machine learning by median-of-means: theory and practice
- Regularization and the small-ball method. II: Complexity dependent error rates
- Characterizing the SLOPE trade-off: a variational perspective and the Donoho-Tanner limit
- Bayesian factor-adjusted sparse regression
- Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem
- Adaptive Huber regression on Markov-dependent data
- Sharp multiple testing boundary for sparse sequences
- RANK: Large-Scale Inference With Graphical Nonlinear Knockoffs
- On the asymptotic properties of SLOPE
- Does SLOPE outperform bridge regression?
- Predictor ranking and false discovery proportion control in high-dimensional regression
- Variable selection via adaptive false negative control in linear regression
- On the exponentially weighted aggregate with the Laplace prior
- A sharp oracle inequality for graph-slope
- Scalable approximations for generalized linear problems
- Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk estimate
- Stab-GKnock: controlled variable selection for partially linear models using generalized knockoffs
- Group SLOPE – Adaptive Selection of Groups of Predictors
- Nested model averaging on solution path for high-dimensional linear regression
- Iterative gradient descent for outlier detection
- Learning from MOM's principles: Le Cam's approach
- Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
- Fundamental barriers to high-dimensional regression with convex penalties
- Iterative algorithm for discrete structure recovery