SLOPE is adaptive to unknown sparsity and asymptotically minimax


Publication:292875

DOI: 10.1214/15-AOS1397
zbMath: 1338.62032
arXiv: 1503.08393
OpenAlex: W2963943067
MaRDI QID: Q292875

Weijie Su, Emmanuel J. Candès

Publication date: 9 June 2016

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/1503.08393
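For context, a brief sketch of the paper's subject and headline result (a summary, not part of the original record): SLOPE (Sorted L-One Penalized Estimation) is the convex program

\[
\hat{\beta} = \arg\min_{b \in \mathbb{R}^p} \; \tfrac{1}{2}\|y - Xb\|_2^2 + \sum_{i=1}^{p} \lambda_i |b|_{(i)},
\qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0,
\]

where \(|b|_{(1)} \ge \cdots \ge |b|_{(p)}\) denote the absolute values of the coordinates of \(b\) sorted in decreasing order. The paper shows that with the weights \(\lambda_i\) set to Benjamini–Hochberg-type critical values, on the order of \(\sigma\sqrt{2\log(p/i)}\), SLOPE attains squared estimation error \((1 + o(1))\, 2\sigma^2 k \log(p/k)\) uniformly over \(k\)-sparse signals, matching the asymptotic minimax rate without prior knowledge of the sparsity level \(k\).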




Related Items (39)

Fundamental barriers to high-dimensional regression with convex penalties
Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem
Iterative algorithm for discrete structure recovery
SLOPE is adaptive to unknown sparsity and asymptotically minimax
Adaptive Huber regression on Markov-dependent data
Sparse index clones via the sorted ℓ1-Norm
Bayesian factor-adjusted sparse regression
The Spike-and-Slab LASSO
Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
Group SLOPE – Adaptive Selection of Groups of Predictors
Predictor ranking and false discovery proportion control in high-dimensional regression
Characterizing the SLOPE trade-off: a variational perspective and the Donoho-Tanner limit
Fundamental limits of weak recovery with applications to phase retrieval
Optimal false discovery control of minimax estimators
Robust machine learning by median-of-means: theory and practice
On the asymptotic properties of SLOPE
On spike and slab empirical Bayes multiple testing
RANK: Large-Scale Inference With Graphical Nonlinear Knockoffs
Oracle inequalities for high-dimensional prediction
Improved bounds for square-root Lasso and square-root slope
Bayesian estimation of sparse signals with a continuous spike-and-slab prior
Slope meets Lasso: improved oracle bounds and optimality
Debiasing the Lasso: optimal sample size for Gaussian designs
Regularization and the small-ball method. I: Sparse recovery
Sparse inference of the drift of a high-dimensional Ornstein-Uhlenbeck process
Learning from MOM's principles: Le Cam's approach
Variable selection via adaptive false negative control in linear regression
Sorted concave penalized regression
On the exponentially weighted aggregate with the Laplace prior
Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk estimate
Sharp oracle inequalities for low-complexity priors
Simple expressions of the LASSO and SLOPE estimators in low-dimension
Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
Sharp Oracle Inequalities for Square Root Regularization
Regularization and the small-ball method II: complexity dependent error rates
Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
Unnamed Item
Iterative gradient descent for outlier detection
A Unifying Tutorial on Approximate Message Passing


