SLOPE is adaptive to unknown sparsity and asymptotically minimax
DOI: 10.1214/15-AOS1397
zbMath: 1338.62032
arXiv: 1503.08393
OpenAlex: W2963943067
MaRDI QID: Q292875
Publication date: 9 June 2016
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1503.08393
Keywords: SLOPE; adaptivity; sparse regression; Benjamini-Hochberg procedure; false discovery rate (FDR); FDR thresholding
MSC classifications: Nonparametric hypothesis testing (62G10) · Nonparametric estimation (62G05) · Minimax procedures in statistical decision theory (62C20) · Paired and multiple comparisons; multiple testing (62J15)
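The record itself does not reproduce the estimator, so for orientation here is a minimal sketch of the SLOPE program studied in the paper: minimize 0.5‖y − Xb‖² + Σᵢ λᵢ|b|₍ᵢ₎, where |b|₍₁₎ ≥ … ≥ |b|₍ₚ₎ are the sorted absolute coefficients and λ₁ ≥ … ≥ λₚ ≥ 0 are Benjamini-Hochberg-style weights λᵢ = Φ⁻¹(1 − iq/(2p)). The solver below is a plain proximal-gradient loop with the stack-based prox of the sorted ℓ1 norm from Bogdan et al. (2015); the function names `bh_weights`, `prox_sorted_l1`, and `slope` are illustrative choices, not taken from the paper.

```python
# Sketch of SLOPE: minimize 0.5 * ||y - X b||^2 + sum_i lambda_i |b|_(i),
# with nonincreasing weights lambda_i = Phi^{-1}(1 - i*q / (2p)).
import numpy as np
from scipy.stats import norm


def bh_weights(p, q=0.1):
    """BH-style SLOPE weights lambda_i = Phi^{-1}(1 - i q / (2p))."""
    i = np.arange(1, p + 1)
    return norm.ppf(1 - i * q / (2 * p))


def prox_sorted_l1(v, lam):
    """Prox of the sorted-l1 norm: argmin_x 0.5||x - v||^2 + sum_i lam_i |x|_(i).

    Pool-adjacent-violators on |v| (sorted decreasingly) minus lam, then clip
    at zero and restore signs and original positions.
    """
    sign = np.sign(v)
    order = np.argsort(np.abs(v))[::-1]
    w = np.abs(v)[order] - lam
    sums, counts = [], []                 # stack of (block sum, block size)
    for val in w:                         # merge blocks violating monotonicity
        s, c = val, 1
        while sums and sums[-1] / counts[-1] <= s / c:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    x = np.concatenate([np.full(c, max(s / c, 0.0))
                        for s, c in zip(sums, counts)])
    out = np.empty(len(v))
    out[order] = x
    return sign * out


def slope(X, y, lam, n_iter=1000):
    """Proximal gradient (ISTA) for SLOPE; lam must be nonincreasing, >= 0."""
    L = np.linalg.norm(X, 2) ** 2         # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = prox_sorted_l1(b - grad / L, lam / L)
    return b
```

A typical call, assuming known noise level sigma, would be `b_hat = slope(X, y, sigma * bh_weights(X.shape[1], q=0.1))`; the paper's adaptivity results concern exactly this BH-derived weight sequence.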
Related Items
- Fundamental barriers to high-dimensional regression with convex penalties
- Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem
- Iterative algorithm for discrete structure recovery
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Adaptive Huber regression on Markov-dependent data
- Sparse index clones via the sorted ℓ1-Norm
- Bayesian factor-adjusted sparse regression
- The Spike-and-Slab LASSO
- Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
- Group SLOPE – Adaptive Selection of Groups of Predictors
- Predictor ranking and false discovery proportion control in high-dimensional regression
- Characterizing the SLOPE trade-off: a variational perspective and the Donoho-Tanner limit
- Fundamental limits of weak recovery with applications to phase retrieval
- Optimal false discovery control of minimax estimators
- Robust machine learning by median-of-means: theory and practice
- On the asymptotic properties of SLOPE
- On spike and slab empirical Bayes multiple testing
- RANK: Large-Scale Inference With Graphical Nonlinear Knockoffs
- Oracle inequalities for high-dimensional prediction
- Improved bounds for square-root Lasso and square-root slope
- Bayesian estimation of sparse signals with a continuous spike-and-slab prior
- Slope meets Lasso: improved oracle bounds and optimality
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Regularization and the small-ball method. I: Sparse recovery
- Sparse inference of the drift of a high-dimensional Ornstein-Uhlenbeck process
- Learning from MOM's principles: Le Cam's approach
- Variable selection via adaptive false negative control in linear regression
- Sorted concave penalized regression
- On the exponentially weighted aggregate with the Laplace prior
- Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk estimate
- Sharp oracle inequalities for low-complexity priors
- Simple expressions of the LASSO and SLOPE estimators in low-dimension
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Sharp Oracle Inequalities for Square Root Regularization
- Regularization and the small-ball method II: complexity dependent error rates
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
- Iterative gradient descent for outlier detection
- A Unifying Tutorial on Approximate Message Passing
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Gaussian graphical model estimation with false discovery rate control
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- A nonparametric empirical Bayes approach to adaptive minimax estimation
- Near-ideal model selection by \(\ell_1\) minimization
- Controlling the false discovery rate via knockoffs
- SLOPE-adaptive variable selection via convex optimization
- High-dimensional Ising model selection using \(\ell_1\)-regularized logistic regression
- Lasso-type recovery of sparse representations for high-dimensional data
- A data-driven block thresholding approach to wavelet estimation
- Estimation of the mean of a multivariate normal distribution
- Minimax estimation of the mean of a normal distribution when the parameter space is restricted
- Minimax risk over \(l_p\)-balls for \(l_q\)-error
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Model selection for Gaussian regression with random design
- The control of the false discovery rate in multiple testing under dependency
- Asymptotic equivalence theory for nonparametric regression with random design
- The risk inflation criterion for multiple regression
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- On the conditions used to prove oracle results for the Lasso
- Model selection and sharp asymptotic minimaxity
- A significance test for the lasso
- Simultaneous analysis of Lasso and Dantzig selector
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
- Adapting to unknown sparsity by controlling the false discovery rate
- Nonmetric multidimensional scaling. A numerical method
- Counting faces of randomly projected polytopes when the projection radically lowers dimension
- Adapting to Unknown Smoothness via Wavelet Shrinkage
- Model selection for regression on a random design
- A two stage shrinkage testimator for the mean of an exponential distribution
- The Covariance Inflation Criterion for Adaptive Model Selection
- Ideal spatial adaptation by wavelet shrinkage
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Estimating False Discovery Proportion Under Arbitrary Covariance Dependence
- Local asymptotic coding and the minimum description length
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell_1$-Constrained Quadratic Programming (Lasso)
- The LASSO Risk for Gaussian Matrices
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Exponential Bounds Implying Construction of Compressed Sensing Matrices, Error-Correcting Codes, and Neighborly Polytopes by Random Sampling
- Simultaneous Regression Shrinkage, Variable Selection, and Supervised Clustering of Predictors with OSCAR
- Stable signal recovery from incomplete and inaccurate measurements
- Inequalities: theory of majorization and its applications
- Gaussian model selection