SLOPE: adaptive variable selection via convex optimization

Publication:902886

DOI: 10.1214/15-AOAS842
zbMATH: 1454.62212
arXiv: 1407.3824
OpenAlex: W1916786071
Wikidata: Q40166307
Scholia: Q40166307
MaRDI QID: Q902886

Małgorzata Bogdan, Ewout van den Berg, Chiara Sabatti, Weijie Su, Emmanuel J. Candès

Publication date: 4 January 2016

Published in: The Annals of Applied Statistics

Full work available at URL: https://arxiv.org/abs/1407.3824
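For orientation: the SLOPE estimator introduced in this paper solves \(\min_b \frac{1}{2}\|y - Xb\|_2^2 + \sum_{i=1}^p \lambda_i |b|_{(i)}\), where \(|b|_{(1)} \ge \cdots \ge |b|_{(p)}\) are the absolute coefficients in decreasing order and \(\lambda_1 \ge \cdots \ge \lambda_p \ge 0\). The key computational primitive is the proximal operator of this sorted \(\ell_1\) norm, which the paper evaluates in \(O(p \log p)\) time via a stack-based pool-adjacent-violators pass. The NumPy code below is a minimal illustrative sketch of that prox under those assumptions; the function name `prox_sorted_l1` is hypothetical, and this is not the authors' released SLOPE software.

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Illustrative sketch: argmin_b 0.5*||b - v||^2 + sum_i lam[i]*|b|_(i),
    assuming lam is nonincreasing, nonnegative, and the same length as v."""
    sign = np.sign(v)
    u = np.abs(v)
    order = np.argsort(-u)        # positions of |v| in decreasing order
    z = u[order] - lam            # to be projected onto the nonincreasing,
                                  # nonnegative cone
    # Pool adjacent violators: maintain a stack of blocks with their means,
    # merging whenever block means fail to decrease.
    starts, ends, means = [], [], []
    for i, zi in enumerate(z):
        starts.append(i); ends.append(i); means.append(zi)
        while len(means) > 1 and means[-2] <= means[-1]:
            # replace the last two blocks by one block with their weighted mean
            w1 = ends[-2] - starts[-2] + 1
            w2 = ends[-1] - starts[-1] + 1
            m = (w1 * means[-2] + w2 * means[-1]) / (w1 + w2)
            starts.pop(); ends.pop(); means.pop()
            ends[-1] = i; means[-1] = m
    x = np.zeros_like(z)
    for s, e, m in zip(starts, ends, means):
        x[s:e + 1] = max(m, 0.0)  # clip at zero
    out = np.empty_like(x)
    out[order] = x                # undo the sort
    return sign * out
```

Given this prox, the full SLOPE problem can be solved by proximal gradient descent, iterating \(b \leftarrow \operatorname{prox}(b + t X^\top (y - Xb),\, t\lambda)\) with step size \(t \le 1/\|X\|_2^2\). For the regularizing sequence, the paper advocates Benjamini-Hochberg-style weights \(\lambda_i = \Phi^{-1}(1 - iq/(2p))\), e.g. `lam = scipy.stats.norm.ppf(1 - q * np.arange(1, p + 1) / (2 * p))`, which control the false discovery rate at level \(q\) in the orthogonal-design setting analyzed there.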



Related Items

Fundamental barriers to high-dimensional regression with convex penalties
Canonical thresholding for nonsparse high-dimensional linear regression
Efficient projection algorithms onto the weighted \(\ell_1\) ball
Familywise error rate control via knockoffs
Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem
Solving the OSCAR and SLOPE Models Using a Semismooth Newton-Based Augmented Lagrangian Method
Iterative algorithm for discrete structure recovery
SLOPE is adaptive to unknown sparsity and asymptotically minimax
Variable Selection With Second-Generation P-Values
Empirical Bayes cumulative \(\ell\)-value multiple testing procedure for sparse sequences
Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles
Sparse index clones via the sorted ℓ1-Norm
Proximal operator for the sorted \(\ell_1\) norm: application to testing procedures based on SLOPE
Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models
Estimating minimum effect with outlier selection
Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
Group SLOPE – Adaptive Selection of Groups of Predictors
Feasibility and a fast algorithm for Euclidean distance matrix optimization with ordinal constraints
Predictor ranking and false discovery proportion control in high-dimensional regression
Characterizing the SLOPE trade-off: a variational perspective and the Donoho-Tanner limit
Sparse optimization via vector \(k\)-norm and DC programming with an application to feature selection for support vector machines
A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
Optimal false discovery control of minimax estimators
Hedonic pricing modelling with unstructured predictors: an application to Italian fashion industry
Statistical proof? The problem of irreproducibility
A power analysis for Model-X knockoffs with \(\ell_p\)-regularized statistics
False Discovery Rate Control via Data Splitting
Robust machine learning by median-of-means: theory and practice
An easily implementable algorithm for efficient projection onto the ordered weighted \(\ell_1\) norm ball
Adaptive novelty detection with false discovery rate guarantee
SLOPE: adaptive variable selection via convex optimization
On Lasso and Slope drift estimators for Lévy-driven Ornstein-Uhlenbeck processes
Approximate Selective Inference via Maximum Likelihood
A tradeoff between false discovery and true positive proportions for sparse high-dimensional logistic regression
On the asymptotic properties of SLOPE
On spike and slab empirical Bayes multiple testing
Independently Interpretable Lasso for Generalized Linear Models
Adaptive Huber Regression
Oracle inequalities for high-dimensional prediction
Improved bounds for square-root Lasso and square-root slope
Adapting Regularized Low-Rank Models for Parallel Architectures
Slope meets Lasso: improved oracle bounds and optimality
Overcoming the limitations of phase transition by higher order analysis of regularization techniques
Online rules for control of false discovery rate and false discovery exceedance
Regularization and the small-ball method. I: Sparse recovery
The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty
Facilitating OWL norm minimizations
Learning from MOM's principles: Le Cam's approach
A flexible shrinkage operator for fussy grouped variable selection
Variable selection via adaptive false negative control in linear regression
High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
Sorted concave penalized regression
False Discovery Rate Control Under General Dependence By Symmetrized Data Aggregation
Model selection with mixed variables on the Lasso path
Variable selection with Hamming loss
On the exponentially weighted aggregate with the Laplace prior
Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk estimate
Iteratively reweighted \(\ell_1\)-penalized robust regression
On the sparsity of Mallows model averaging estimator
Randomized Gradient Boosting Machine
Variable selection consistency of Gaussian process regression
Sharp oracle inequalities for low-complexity priors
Simple expressions of the LASSO and SLOPE estimators in low-dimension
Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
Sharp Oracle Inequalities for Square Root Regularization
Regularization and the small-ball method II: complexity dependent error rates
A consistent and numerically efficient variable selection method for sparse Poisson regression with applications to learning and signal recovery
Detecting multiple replicating signals using adaptive filtering procedures
Iterative gradient descent for outlier detection
A Unifying Tutorial on Approximate Message Passing
Pattern recovery and signal denoising by SLOPE when the design matrix is orthogonal


Uses Software


Cites Work