SLOPE-adaptive variable selection via convex optimization
From MaRDI portal
DOI: 10.1214/15-AOAS842 · zbMath: 1454.62212 · arXiv: 1407.3824 · OpenAlex: W1916786071 · Wikidata: Q40166307 · Scholia: Q40166307 · MaRDI QID: Q902886
Małgorzata Bogdan, Ewout van den Berg, Chiara Sabatti, Weijie Su, Emmanuel J. Candès
Publication date: 4 January 2016
Published in: The Annals of Applied Statistics
Full work available at URL: https://arxiv.org/abs/1407.3824
Keywords: variable selection; false discovery rate; Lasso; sparse regression; sorted \(\ell_{1}\) penalized estimation (SLOPE)
Related Items (73)
Fundamental barriers to high-dimensional regression with convex penalties ⋮ Canonical thresholding for nonsparse high-dimensional linear regression ⋮ Efficient projection algorithms onto the weighted \(\ell_1\) ball ⋮ Familywise error rate control via knockoffs ⋮ Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem ⋮ Solving the OSCAR and SLOPE Models Using a Semismooth Newton-Based Augmented Lagrangian Method ⋮ Iterative algorithm for discrete structure recovery ⋮ SLOPE is adaptive to unknown sparsity and asymptotically minimax ⋮ Variable Selection With Second-Generation P-Values ⋮ Empirical Bayes cumulative \(\ell\)-value multiple testing procedure for sparse sequences ⋮ Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles ⋮ Sparse index clones via the sorted \(\ell_1\)-norm ⋮ Proximal operator for the sorted \(\ell_1\) norm: application to testing procedures based on SLOPE ⋮ Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models ⋮ Estimating minimum effect with outlier selection ⋮ Adaptive Bayesian SLOPE: Model Selection With Incomplete Data ⋮ Group SLOPE – Adaptive Selection of Groups of Predictors ⋮ Feasibility and a fast algorithm for Euclidean distance matrix optimization with ordinal constraints ⋮ Predictor ranking and false discovery proportion control in high-dimensional regression ⋮ Characterizing the SLOPE trade-off: a variational perspective and the Donoho-Tanner limit ⋮ Sparse optimization via vector \(k\)-norm and DC programming with an application to feature selection for support vector machines ⋮ A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates ⋮ Optimal false discovery control of minimax estimators ⋮ Hedonic pricing modelling with unstructured predictors: an application to Italian fashion industry ⋮ Statistical proof? The problem of irreproducibility ⋮ A power analysis for Model-X knockoffs with \(\ell_p\)-regularized statistics ⋮ False Discovery Rate Control via Data Splitting ⋮ Robust machine learning by median-of-means: theory and practice ⋮ An easily implementable algorithm for efficient projection onto the ordered weighted \(\ell_1\) norm ball ⋮ Adaptive novelty detection with false discovery rate guarantee ⋮ SLOPE-adaptive variable selection via convex optimization ⋮ On Lasso and Slope drift estimators for Lévy-driven Ornstein-Uhlenbeck processes ⋮ Approximate Selective Inference via Maximum Likelihood ⋮ A tradeoff between false discovery and true positive proportions for sparse high-dimensional logistic regression ⋮ On the asymptotic properties of SLOPE ⋮ On spike and slab empirical Bayes multiple testing ⋮ Independently Interpretable Lasso for Generalized Linear Models ⋮ Adaptive Huber Regression ⋮ Oracle inequalities for high-dimensional prediction ⋮ Improved bounds for square-root Lasso and square-root slope ⋮ Adapting Regularized Low-Rank Models for Parallel Architectures ⋮ Slope meets Lasso: improved oracle bounds and optimality ⋮ Overcoming the limitations of phase transition by higher order analysis of regularization techniques ⋮ Online rules for control of false discovery rate and false discovery exceedance ⋮ Regularization and the small-ball method. I: Sparse recovery ⋮ The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty ⋮ Facilitating OWL norm minimizations ⋮ Learning from MOM's principles: Le Cam's approach ⋮ A flexible shrinkage operator for fussy grouped variable selection ⋮ Variable selection via adaptive false negative control in linear regression ⋮ High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi} ⋮ Sorted concave penalized regression ⋮ False Discovery Rate Control Under General Dependence By Symmetrized Data Aggregation ⋮ Model selection with mixed variables on the Lasso path ⋮ Variable selection with Hamming loss ⋮ On the exponentially weighted aggregate with the Laplace prior ⋮ Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk estimate ⋮ Iteratively reweighted \(\ell_1\)-penalized robust regression ⋮ On the sparsity of Mallows model averaging estimator ⋮ Randomized Gradient Boosting Machine ⋮ Variable selection consistency of Gaussian process regression ⋮ Sharp oracle inequalities for low-complexity priors ⋮ Simple expressions of the LASSO and SLOPE estimators in low-dimension ⋮ Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions ⋮ Sharp Oracle Inequalities for Square Root Regularization ⋮ Regularization and the small-ball method II: complexity dependent error rates ⋮ Unnamed Item ⋮ Unnamed Item ⋮ A consistent and numerically efficient variable selection method for sparse Poisson regression with applications to learning and signal recovery ⋮ Detecting multiple replicating signals using adaptive filtering procedures ⋮ Iterative gradient descent for outlier detection ⋮ A Unifying Tutorial on Approximate Message Passing ⋮ Pattern recovery and signal denoising by SLOPE when the design matrix is orthogonal
Uses Software
Cites Work
- On asymptotically optimal confidence regions and tests for high-dimensional models
- The Adaptive Lasso and Its Oracle Properties
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Valid post-selection inference
- Statistical significance in high-dimensional linear models
- \(\ell_{1}\)-penalization for mixture regression models
- Asymptotic Bayes-optimality under sparsity of some multiple testing procedures
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Active set algorithms for isotonic regression; a unifying framework
- High-dimensional variable selection
- Controlling the false discovery rate via knockoffs
- SLOPE-adaptive variable selection via convex optimization
- A simple forward selection procedure based on false discovery rate control
- Relaxed Lasso
- Projections onto order simplexes
- Introductory lectures on convex optimization. A basic course.
- Templates for convex cone problems with applications to sparse signal recovery
- Some results on false discovery rate in stepwise multiple testing procedures.
- Minimax detection of a signal for \(l^n\)-balls.
- The risk inflation criterion for multiple regression
- Some optimality properties of FDR controlling rules under sparsity
- Model selection and sharp asymptotic minimaxity
- A significance test for the lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Adapting to unknown sparsity by controlling the false discovery rate
- Nonmetric multidimensional scaling. A numerical method
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- p-Values for High-Dimensional Regression
- Scaled sparse linear regression
- Tweedie’s Formula and Selection Bias
- Model selection by multiple test procedures
- The Covariance Inflation Criterion for Adaptive Model Selection
- Local asymptotic coding and the minimum description length
- Some Comments on \(C_p\)
- False Discovery Rate–Adjusted Multiple Confidence Intervals for Selected Parameters
- Gaussian model selection
- A new look at the statistical model identification
This page was built for publication: SLOPE-adaptive variable selection via convex optimization