SLOPE-adaptive variable selection via convex optimization
DOI: 10.1214/15-AOAS842 · zbMATH Open: 1454.62212 · arXiv: 1407.3824 · OpenAlex: W1916786071 · Wikidata: Q40166307 · Scholia: Q40166307 · MaRDI QID: Q902886 · FDO: Q902886
Authors: Małgorzata Bogdan, Ewout van den Berg, Chiara Sabatti, Weijie Su, Emmanuel J. Candès
Publication date: 4 January 2016
Published in: The Annals of Applied Statistics
Full work available at URL: https://arxiv.org/abs/1407.3824
Keywords: Lasso; variable selection; false discovery rate; sparse regression; sorted \(\ell_{1}\) penalized estimation (SLOPE)
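For reference, SLOPE (sorted \(\ell_1\) penalized estimation) is the convex program studied in the publication; the display below sketches its defining objective as given in the paper.
\[
\hat{\beta} = \operatorname*{arg\,min}_{b \in \mathbb{R}^p} \; \frac{1}{2}\,\|y - Xb\|_2^2 + \sum_{i=1}^{p} \lambda_i\, |b|_{(i)}, \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0,
\]
where \(|b|_{(1)} \ge \cdots \ge |b|_{(p)}\) are the absolute values of the coordinates of \(b\) sorted in decreasing order; with all \(\lambda_i\) equal, the penalty reduces to the ordinary Lasso.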
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Templates for convex cone problems with applications to sparse signal recovery
- The Adaptive Lasso and Its Oracle Properties
- Title not available
- Some Comments on \(C_P\)
- A new look at the statistical model identification
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- \(p\)-values for high-dimensional regression
- Title not available
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Introductory lectures on convex optimization. A basic course.
- Minimax detection of a signal for \(l^n\)-balls.
- A significance test for the lasso
- Adapting to unknown sparsity by controlling the false discovery rate
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- Scaled sparse linear regression
- Valid post-selection inference
- Statistical significance in high-dimensional linear models
- Gaussian model selection
- \(\ell_{1}\)-penalization for mixture regression models
- High-dimensional variable selection
- Nonmetric multidimensional scaling. A numerical method
- Relaxed Lasso
- The risk inflation criterion for multiple regression
- Controlling the false discovery rate via knockoffs
- Projections onto order simplexes
- Some results on false discovery rate in stepwise multiple testing procedures.
- Title not available
- Active set algorithms for isotonic regression; a unifying framework
- SLOPE-adaptive variable selection via convex optimization
- Tweedie’s Formula and Selection Bias
- Asymptotic Bayes-optimality under sparsity of some multiple testing procedures
- Model selection and sharp asymptotic minimaxity
- The Covariance Inflation Criterion for Adaptive Model Selection
- Local asymptotic coding and the minimum description length
- False Discovery Rate–Adjusted Multiple Confidence Intervals for Selected Parameters
- Model selection by multiple test procedures
- A simple forward selection procedure based on false discovery rate control
- Title not available
- Some optimality properties of FDR controlling rules under sparsity
Cited In (83)
- SLOPE-adaptive variable selection via convex optimization
- High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
- Optimal false discovery control of minimax estimators
- False Discovery Rate Control via Data Splitting
- Estimating minimum effect with outlier selection
- A consistent and numerically efficient variable selection method for sparse Poisson regression with applications to learning and signal recovery
- Adapting Regularized Low-Rank Models for Parallel Architectures
- Overcoming the limitations of phase transition by higher order analysis of regularization techniques
- On spike and slab empirical Bayes multiple testing
- Sorted concave penalized regression
- Title not available
- Facilitating OWL norm minimizations
- A Unifying Tutorial on Approximate Message Passing
- Familywise error rate control via knockoffs
- Canonical thresholding for nonsparse high-dimensional linear regression
- Solving the OSCAR and SLOPE Models Using a Semismooth Newton-Based Augmented Lagrangian Method
- Regularization and the small-ball method. I: Sparse recovery
- Improved bounds for square-root Lasso and square-root slope
- Efficient projection algorithms onto the weighted \(\ell_1\) ball
- Oracle inequalities for high-dimensional prediction
- Slope meets Lasso: improved oracle bounds and optimality
- Online rules for control of false discovery rate and false discovery exceedance
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Sharp oracle inequalities for low-complexity priors
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Robust machine learning by median-of-means: theory and practice
- Empirical Bayes cumulative \(\ell\)-value multiple testing procedure for sparse sequences
- Detecting multiple replicating signals using adaptive filtering procedures
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- Proximal operator for the sorted \(\ell_1\) norm: application to testing procedures based on SLOPE
- On the asymptotic properties of SLOPE
- A flexible shrinkage operator for fussy grouped variable selection
- Model selection with mixed variables on the Lasso path
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- Adaptive Huber Regression
- Sharp Oracle Inequalities for Square Root Regularization
- Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles
- Variable selection consistency of Gaussian process regression
- Predictor ranking and false discovery proportion control in high-dimensional regression
- Variable selection via adaptive false negative control in linear regression
- On the exponentially weighted aggregate with the Laplace prior
- Variable selection with Hamming loss
- Simple expressions of the LASSO and SLOPE estimators in low-dimension
- Feasibility and a fast algorithm for Euclidean distance matrix optimization with ordinal constraints
- Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk estimate
- On the sparsity of Mallows model averaging estimator
- Randomized Gradient Boosting Machine
- The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty
- False Discovery Rate Control Under General Dependence By Symmetrized Data Aggregation
- Regularization and the small-ball method II: complexity dependent error rates
- Iterative gradient descent for outlier detection
- Title not available
- Learning from MOM's principles: Le Cam's approach
- Statistical proof? The problem of irreproducibility
- Fundamental barriers to high-dimensional regression with convex penalties
- Iterative algorithm for discrete structure recovery
- Sparse optimization via vector \(k\)-norm and DC programming with an application to feature selection for support vector machines
- A power analysis for Model-X knockoffs with \(\ell_p\)-regularized statistics
- An easily implementable algorithm for efficient projection onto the ordered weighted \(\ell_1\) norm ball
- Adaptive novelty detection with false discovery rate guarantee
- Robust and tuning-free sparse linear regression via square-root slope
- Dual formulation of the sparsity constrained optimization problem: application to classification
- Pattern recovery and signal denoising by SLOPE when the design matrix is orthogonal
- Low-rank tensor regression for selection of grouped variables
- Characterizing the SLOPE trade-off: a variational perspective and the Donoho-Tanner limit
- Independently Interpretable Lasso for Generalized Linear Models
- On Lasso and Slope drift estimators for Lévy-driven Ornstein-Uhlenbeck processes
- Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem
- Approximate Selective Inference via Maximum Likelihood
- Optimized variable selection via repeated data splitting
- A tradeoff between false discovery and true positive proportions for sparse high-dimensional logistic regression
- Comment: Feature Screening and Variable Selection via Iterative Ridge Regression
- Nonconvex Dantzig selector and its parallel computing algorithm
- Variable Selection With Second-Generation P-Values
- Efficient path algorithms for clustered Lasso and OSCAR
- Sparse index clones via the sorted \(\ell_1\)-norm
- Group SLOPE – Adaptive Selection of Groups of Predictors
- geneSLOPE
- Nested model averaging on solution path for high-dimensional linear regression
- Hedonic pricing modelling with unstructured predictors: an application to Italian fashion industry
- Classification in high dimension with an Alternative Feature Augmentation via Nonparametrics and Selection (AFANS)
- Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models
- Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
Uses Software