Slope meets Lasso: improved oracle bounds and optimality
DOI: 10.1214/17-AOS1670
zbMath: 1405.62056
arXiv: 1605.08651
OpenAlex: W2962710557
Wikidata: Q105584294 (Scholia: Q105584294)
MaRDI QID: Q1990596
Authors: Pierre C. Bellec, Guillaume Lecué, Alexandre B. Tsybakov
Publication date: 25 October 2018
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1605.08651
MSC classification:
- Nonparametric regression and quantile regression (62G08)
- Estimation in multivariate analysis (62H12)
- Linear regression; mixed models (62J05)
- Nonparametric estimation (62G05)
- Minimax procedures in statistical decision theory (62C20)
Related Items
- Fundamental barriers to high-dimensional regression with convex penalties
- Canonical thresholding for nonsparse high-dimensional linear regression
- Adaptive robust estimation in sparse vector model
- An efficient semismooth Newton method for adaptive sparse signal recovery problems
- On the prediction loss of the Lasso in the partially labeled setting
- Iterative algorithm for discrete structure recovery
- Penalized least square in sparse setting with convex penalty and non Gaussian errors
- Sparse index clones via the sorted ℓ1-Norm
- Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
- On Kendall's regression
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- Characterizing the SLOPE trade-off: a variational perspective and the Donoho-Tanner limit
- Estimation of the \(\ell_2\)-norm and testing in sparse linear regression with unknown variance
- Sparse optimization via vector \(k\)-norm and DC programming with an application to feature selection for support vector machines
- Regularization, sparse recovery, and median-of-means tournaments
- Debiased lasso for generalized linear models with a diverging number of covariates
- Debiasing convex regularized estimators and interval estimation in linear models
- Lasso in Infinite dimension: application to variable selection in functional multivariate linear regression
- Robust machine learning by median-of-means: theory and practice
- Retire: robust expectile regression in high dimensions
- On Lasso and Slope drift estimators for Lévy-driven Ornstein-Uhlenbeck processes
- The Lasso with general Gaussian designs with applications to hypothesis testing
- An alternative to synthetic control for models with many covariates under sparsity
- On the asymptotic properties of SLOPE
- Structural inference in sparse high-dimensional vector autoregressions
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- Adaptive Huber Regression
- Oracle inequalities for high-dimensional prediction
- Improved bounds for square-root Lasso and square-root slope
- Sharp oracle inequalities for least squares estimators in shape restricted regression
- Facilitating OWL norm minimizations
- Sparse inference of the drift of a high-dimensional Ornstein-Uhlenbeck process
- Sorted concave penalized regression
- Degrees of freedom in submodular regularization: a computational perspective of Stein's unbiased risk estimate
- A MOM-based ensemble method for robustness, subsampling and hyperparameter tuning
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- On the sparsity of Mallows model averaging estimator
- Second-order Stein: SURE for SURE and other applications in high-dimensional inference
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
- A Unifying Tutorial on Approximate Message Passing
Cites Work
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Upper bounds on product and multiplier empirical processes
- Optimal detection of sparse principal components in high dimension
- On the prediction performance of the Lasso
- Sparse recovery under weak moment assumptions
- Exponential screening and optimal rates of sparse estimation
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Oracle inequalities and optimal inference under group sparsity
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- SLOPE-adaptive variable selection via convex optimization
- How well can we estimate a sparse vector?
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- MAP model selection in Gaussian regression
- Simultaneous analysis of Lasso and Dantzig selector
- Tail bounds via generic chaining
- Learning without Concentration
- Bounding the Smallest Singular Value of a Random Matrix Without Concentration
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Introduction to nonparametric estimation