SLOPE-adaptive variable selection via convex optimization
Abstract: We introduce a new estimator for the vector of coefficients \(\beta\) in the linear model \(y = X\beta + z\), where \(X\) has dimensions \(n \times p\) with \(p\) possibly larger than \(n\). SLOPE, short for Sorted L-One Penalized Estimation, is the solution to \[\min_{b \in \mathbb{R}^p} \frac{1}{2}\Vert y - Xb \Vert_{\ell_2}^2 + \lambda_1 \vert b\vert_{(1)} + \lambda_2 \vert b\vert_{(2)} + \cdots + \lambda_p \vert b\vert_{(p)},\] where \(\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0\) and \(\vert b\vert_{(1)} \ge \vert b\vert_{(2)} \ge \cdots \ge \vert b\vert_{(p)}\) are the decreasing absolute values of the entries of \(b\). This is a convex program and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical \(\ell_1\) procedures such as the Lasso. Here, the regularizer is a sorted \(\ell_1\) norm, which penalizes the regression coefficients according to their rank: the higher the rank, that is, the stronger the signal, the larger the penalty. This is similar to the Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289-300] procedure (BH), which compares more significant \(p\)-values with more stringent thresholds. One notable choice of the sequence \(\{\lambda_i\}\) is given by the BH critical values \(\lambda_{\mathrm{BH}}(i) = z(1 - iq/(2p))\), where \(q \in (0,1)\) and \(z(\alpha)\) is the \(\alpha\)-quantile of a standard normal distribution. SLOPE aims to provide finite sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with \(\lambda_{\mathrm{BH}}\) provably controls FDR at level \(q\). Moreover, it also appears to have appreciable inferential properties under more general designs \(X\) while having substantial power, as demonstrated in a series of experiments running on both simulated and real data.
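A minimal Python sketch of the pieces the abstract describes, assuming numpy and scipy; the function names (lambda_bh, prox_sorted_l1, slope) are illustrative, and the plain proximal-gradient (ISTA) loop stands in for the faster accelerated solver the paper discusses. The prox of the sorted \(\ell_1\) norm is evaluated via isotonic (pool-adjacent-violators) regression, the construction behind the paper's stack-based algorithm.

    import numpy as np
    from scipy.stats import norm

    def lambda_bh(p, q):
        # BH-style critical values: lambda_i = z(1 - i*q/(2p)) for i = 1, ..., p,
        # where z(alpha) is the standard normal quantile (scipy's norm.ppf).
        i = np.arange(1, p + 1)
        return norm.ppf(1.0 - i * q / (2.0 * p))

    def prox_sorted_l1(v, lam):
        # Proximal operator of the sorted l1 norm:
        #   argmin_b 0.5*||b - v||^2 + sum_i lam_i * |b|_(i),  lam nonincreasing >= 0.
        # Sort |v| decreasingly, subtract lam, project onto nonincreasing sequences
        # by pool-adjacent-violators, clip at zero, then restore signs and order.
        sign = np.sign(v)
        absv = np.abs(v)
        order = np.argsort(-absv)
        z = absv[order] - lam
        vals, lens = [], []          # block averages and block lengths for PAVA
        for zi in z:
            vals.append(zi)
            lens.append(1)
            while len(vals) > 1 and vals[-1] >= vals[-2]:  # merge violating blocks
                v2, l2 = vals.pop(), lens.pop()
                v1, l1 = vals.pop(), lens.pop()
                vals.append((l1 * v1 + l2 * v2) / (l1 + l2))
                lens.append(l1 + l2)
        w = np.maximum(np.repeat(vals, lens), 0.0)
        out = np.empty(len(v))
        out[order] = w
        return sign * out

    def slope(X, y, lam, n_iter=1000):
        # SLOPE via plain proximal gradient (ISTA) with fixed step 1/L, where
        # L = ||X||_2^2 is a Lipschitz constant for the gradient of the
        # least-squares term. (The paper uses an accelerated FISTA-type scheme.)
        L = np.linalg.norm(X, 2) ** 2
        b = np.zeros(X.shape[1])
        for _ in range(n_iter):
            grad = X.T @ (X @ b - y)
            b = prox_sorted_l1(b - grad / L, lam / L)
        return b

For instance, b_hat = slope(X, y, lambda_bh(X.shape[1], q=0.1)) fits SLOPE with the BH sequence at nominal level \(q = 0.1\); under an orthogonal design this choice provably controls the FDR at level \(q\).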
Cites work
- scientific article; zbMATH DE number 720689
- scientific article; zbMATH DE number 802850
- scientific article; zbMATH DE number 845714
- scientific article; zbMATH DE number 3390139
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A new look at the statistical model identification
- A significance test for the lasso
- A simple forward selection procedure based on false discovery rate control
- Active set algorithms for isotonic regression; a unifying framework
- Adapting to unknown sparsity by controlling the false discovery rate
- Asymptotic Bayes-optimality under sparsity of some multiple testing procedures
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Controlling the false discovery rate via knockoffs
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- False Discovery Rate–Adjusted Multiple Confidence Intervals for Selected Parameters
- Gaussian model selection
- High-dimensional variable selection
- Hypothesis Testing in High-Dimensional Regression Under the Gaussian Random Design Model: Asymptotic Theory
- Introductory lectures on convex optimization. A basic course.
- Local asymptotic coding and the minimum description length
- Minimax detection of a signal for \(l^n\)-balls.
- Model selection and sharp asymptotic minimaxity
- Model selection by multiple test procedures
- Nonmetric multidimensional scaling. A numerical method
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Projections onto order simplexes
- Relaxed Lasso
- SLOPE-adaptive variable selection via convex optimization
- Scaled sparse linear regression
- Some Comments on \(C_P\)
- Some optimality properties of FDR controlling rules under sparsity
- Some results on false discovery rate in stepwise multiple testing procedures.
- Statistical significance in high-dimensional linear models
- Templates for convex cone problems with applications to sparse signal recovery
- The Adaptive Lasso and Its Oracle Properties
- The Covariance Inflation Criterion for Adaptive Model Selection
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- The risk inflation criterion for multiple regression
- Tweedie’s Formula and Selection Bias
- Valid post-selection inference
- \(\ell_{1}\)-penalization for mixture regression models
- \(p\)-values for high-dimensional regression
Cited in (84)
- scientific article; zbMATH DE number 7306923
- On the exponentially weighted aggregate with the Laplace prior
- Slope meets Lasso: improved oracle bounds and optimality
- Variable selection with Hamming loss
- Adaptive Huber Regression
- SLOPE-adaptive variable selection via convex optimization
- Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles
- Efficient projection algorithms onto the weighted \(\ell_1\) ball
- A Unifying Tutorial on Approximate Message Passing
- Familywise error rate control via knockoffs
- Model selection with mixed variables on the Lasso path
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- A consistent and numerically efficient variable selection method for sparse Poisson regression with applications to learning and signal recovery
- Canonical thresholding for nonsparse high-dimensional linear regression
- Fundamental barriers to high-dimensional regression with convex penalties
- Overcoming the limitations of phase transition by higher order analysis of regularization techniques
- Regularization and the small-ball method. I: Sparse recovery
- False Discovery Rate Control Under General Dependence By Symmetrized Data Aggregation
- Variable selection consistency of Gaussian process regression
- Iterative algorithm for discrete structure recovery
- Feasibility and a fast algorithm for Euclidean distance matrix optimization with ordinal constraints
- Learning from MOM's principles: Le Cam's approach
- Predictor ranking and false discovery proportion control in high-dimensional regression
- Empirical Bayes cumulative \(\ell\)-value multiple testing procedure for sparse sequences
- Does SLOPE outperform bridge regression?
- Sharp oracle inequalities for low-complexity priors
- Proximal operator for the sorted \(\ell_1\) norm: application to testing procedures based on SLOPE
- High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
- On the sparsity of Mallows model averaging estimator
- scientific article; zbMATH DE number 7307489
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Optimal false discovery control of minimax estimators
- Sharp oracle inequalities for square root regularization
- The trimmed Lasso: sparse recovery guarantees and practical optimization by the generalized soft-min penalty
- Detecting multiple replicating signals using adaptive filtering procedures
- Improved bounds for square-root Lasso and square-root slope
- Randomized Gradient Boosting Machine
- Estimating minimum effect with outlier selection
- On spike and slab empirical Bayes multiple testing
- Robust machine learning by median-of-means: theory and practice
- Simple expressions of the Lasso and SLOPE estimators in low-dimension
- Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk estimate
- Sorted concave penalized regression
- Adapting regularized low-rank models for parallel architectures
- On the asymptotic properties of SLOPE
- Regularization and the small-ball method. II: Complexity dependent error rates
- A flexible shrinkage operator for fussy grouped variable selection
- Online rules for control of false discovery rate and false discovery exceedance
- Sparse optimization via vector \(k\)-norm and DC programming with an application to feature selection for support vector machines
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- Oracle inequalities for high-dimensional prediction
- Facilitating OWL norm minimizations
- Variable selection via adaptive false negative control in linear regression
- Solving the OSCAR and SLOPE models using a semismooth Newton-based augmented Lagrangian method
- Iterative gradient descent for outlier detection
- An easily implementable algorithm for efficient projection onto the ordered weighted \(\ell_1\) norm ball
- Adaptive novelty detection with false discovery rate guarantee
- Optimized variable selection via repeated data splitting
- Approximate Selective Inference via Maximum Likelihood
- Variable Selection With Second-Generation P-Values
- Pattern recovery and signal denoising by SLOPE when the design matrix is orthogonal
- Classification in high dimension with an Alternative Feature Augmentation via Nonparametrics and Selection (AFANS)
- Independently interpretable Lasso for generalized linear models
- Group SLOPE – Adaptive Selection of Groups of Predictors
- A tradeoff between false discovery and true positive proportions for sparse high-dimensional logistic regression
- Robust and tuning-free sparse linear regression via square-root slope
- Dual formulation of the sparsity constrained optimization problem: application to classification
- On Lasso and Slope drift estimators for Lévy-driven Ornstein-Uhlenbeck processes
- Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models
- Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
- Nested model averaging on solution path for high-dimensional linear regression
- Sparse index clones via the sorted \(\ell_1\)-norm
- Low-rank tensor regression for selection of grouped variables
- Comment: Feature Screening and Variable Selection via Iterative Ridge Regression
- Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem
- Nonconvex Dantzig selector and its parallel computing algorithm
- A power analysis for Model-X knockoffs with \(\ell_p\)-regularized statistics
- False Discovery Rate Control via Data Splitting
- geneSLOPE
- Hedonic pricing modelling with unstructured predictors: an application to Italian fashion industry
- Characterizing the SLOPE trade-off: a variational perspective and the Donoho-Tanner limit
- Statistical proof? The problem of irreproducibility
- Efficient path algorithms for clustered Lasso and OSCAR