
SLOPE-adaptive variable selection via convex optimization

From MaRDI portal
Publication:902886

DOI: 10.1214/15-AOAS842 · zbMath: 1454.62212 · arXiv: 1407.3824 · OpenAlex: W1916786071 · Wikidata: Q40166307 · Scholia: Q40166307 · MaRDI QID: Q902886

Małgorzata Bogdan, Ewout van den Berg, Chiara Sabatti, Weijie Su, Emmanuel J. Candès

Publication date: 4 January 2016

Published in: The Annals of Applied Statistics

Full work available at URL: https://arxiv.org/abs/1407.3824
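For readers landing on this record, the key computational primitive of the SLOPE method named in the title is the proximal operator of the sorted \(\ell_1\) norm. The following is a minimal illustrative sketch (the function name and interface are our own, not from the paper's software) using the stack-based pool-adjacent-violators idea: sort the magnitudes, subtract the nonincreasing weight sequence, project onto nonincreasing sequences, clip at zero, and restore signs and order.

```python
import numpy as np

def prox_sorted_l1(y, lam):
    """Sketch of prox of the sorted-l1 (SLOPE) penalty:
    argmin_x 0.5*||x - y||^2 + sum_i lam_i * |x|_(i),
    where |x|_(1) >= |x|_(2) >= ... and lam is nonincreasing."""
    sign = np.sign(y)
    v = np.abs(y)
    order = np.argsort(v)[::-1]          # indices sorting |y| in decreasing order
    z = v[order] - lam                   # shifted, sorted magnitudes
    # Pool-adjacent-violators pass: enforce a nonincreasing solution
    # by averaging any block that violates monotonicity.
    blocks = []                          # each block: [start, end, sum, mean]
    for i, zi in enumerate(z):
        blocks.append([i, i, zi, zi])
        while len(blocks) > 1 and blocks[-2][3] <= blocks[-1][3]:
            b = blocks.pop()
            blocks[-1][1] = b[1]
            blocks[-1][2] += b[2]
            blocks[-1][3] = blocks[-1][2] / (blocks[-1][1] - blocks[-1][0] + 1)
    x_sorted = np.empty_like(z)
    for s, e, _, m in blocks:
        x_sorted[s:e + 1] = max(m, 0.0)  # clip negative block means at zero
    x = np.empty_like(x_sorted)
    x[order] = x_sorted                  # undo the sort
    return sign * x
```

With decreasing weights `lam = [1.0, 0.5]`, `prox_sorted_l1(np.array([3.0, 1.0]), lam)` shrinks the larger coordinate by the larger weight, returning `[2.0, 0.5]`; when two inputs tie, the violating blocks are pooled and shrunk by the average weight. This is a sketch under stated assumptions, not the paper's reference implementation.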




Related Items (73)

Fundamental barriers to high-dimensional regression with convex penalties
Canonical thresholding for nonsparse high-dimensional linear regression
Efficient projection algorithms onto the weighted \(\ell_1\) ball
Familywise error rate control via knockoffs
Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem
Solving the OSCAR and SLOPE Models Using a Semismooth Newton-Based Augmented Lagrangian Method
Iterative algorithm for discrete structure recovery
SLOPE is adaptive to unknown sparsity and asymptotically minimax
Variable Selection With Second-Generation P-Values
Empirical Bayes cumulative \(\ell\)-value multiple testing procedure for sparse sequences
Model Selection With Lasso-Zero: Adding Straw to the Haystack to Better Find Needles
Sparse index clones via the sorted ℓ1-Norm
Proximal operator for the sorted \(\ell_1\) norm: application to testing procedures based on SLOPE
Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models
Estimating minimum effect with outlier selection
Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
Group SLOPE – Adaptive Selection of Groups of Predictors
Feasibility and a fast algorithm for Euclidean distance matrix optimization with ordinal constraints
Predictor ranking and false discovery proportion control in high-dimensional regression
Characterizing the SLOPE trade-off: a variational perspective and the Donoho-Tanner limit
Sparse optimization via vector \(k\)-norm and DC programming with an application to feature selection for support vector machines
A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
Optimal false discovery control of minimax estimators
Hedonic pricing modelling with unstructured predictors: an application to Italian fashion industry
Statistical proof? The problem of irreproducibility
A power analysis for Model-X knockoffs with \(\ell_p\)-regularized statistics
False Discovery Rate Control via Data Splitting
Robust machine learning by median-of-means: theory and practice
An easily implementable algorithm for efficient projection onto the ordered weighted \(\ell_1\) norm ball
Adaptive novelty detection with false discovery rate guarantee
SLOPE-adaptive variable selection via convex optimization
On Lasso and Slope drift estimators for Lévy-driven Ornstein-Uhlenbeck processes
Approximate Selective Inference via Maximum Likelihood
A tradeoff between false discovery and true positive proportions for sparse high-dimensional logistic regression
On the asymptotic properties of SLOPE
On spike and slab empirical Bayes multiple testing
Independently Interpretable Lasso for Generalized Linear Models
Adaptive Huber Regression
Oracle inequalities for high-dimensional prediction
Improved bounds for square-root Lasso and square-root slope
Adapting Regularized Low-Rank Models for Parallel Architectures
Slope meets Lasso: improved oracle bounds and optimality
Overcoming the limitations of phase transition by higher order analysis of regularization techniques
Online rules for control of false discovery rate and false discovery exceedance
Regularization and the small-ball method. I: Sparse recovery
The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty
Facilitating OWL norm minimizations
Learning from MOM's principles: Le Cam's approach
A flexible shrinkage operator for fussy grouped variable selection
Variable selection via adaptive false negative control in linear regression
High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
Sorted concave penalized regression
False Discovery Rate Control Under General Dependence By Symmetrized Data Aggregation
Model selection with mixed variables on the Lasso path
Variable selection with Hamming loss
On the exponentially weighted aggregate with the Laplace prior
Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk estimate
Iteratively reweighted \(\ell_1\)-penalized robust regression
On the sparsity of Mallows model averaging estimator
Randomized Gradient Boosting Machine
Variable selection consistency of Gaussian process regression
Sharp oracle inequalities for low-complexity priors
Simple expressions of the LASSO and SLOPE estimators in low-dimension
Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
Sharp Oracle Inequalities for Square Root Regularization
Regularization and the small-ball method II: complexity dependent error rates
Unnamed Item
Unnamed Item
A consistent and numerically efficient variable selection method for sparse Poisson regression with applications to learning and signal recovery
Detecting multiple replicating signals using adaptive filtering procedures
Iterative gradient descent for outlier detection
A Unifying Tutorial on Approximate Message Passing
Pattern recovery and signal denoising by SLOPE when the design matrix is orthogonal


Uses Software


Cites Work


This page was built for publication: SLOPE-adaptive variable selection via convex optimization