Robustness in sparse high-dimensional linear models: relative efficiency and robust approximate message passing
Publication: 502845
DOI: 10.1214/16-EJS1212
zbMATH Open: 1357.62215
arXiv: 1507.08726
OpenAlex: W2563552746
MaRDI QID: Q502845
FDO: Q502845
Authors: Jelena Bradic
Publication date: 11 January 2017
Published in: Electronic Journal of Statistics
Abstract: Understanding efficiency in high dimensional linear models is a longstanding problem of interest. Classical work with smaller dimensional problems dating back to Huber and Bickel has illustrated the benefits of efficient loss functions. When the number of parameters \(p\) is of the same order as the sample size \(n\), \(p \approx n\), an efficiency pattern different from the one of Huber was recently established. In this work, we consider the effects of model selection on the estimation efficiency of penalized methods. In particular, we explore whether sparsity results in new efficiency patterns when \(p > n\). In the interest of deriving the asymptotic mean squared error for \(l_1\)-regularized M-estimators, we use the powerful framework of approximate message passing. We propose a novel, robust and sparse approximate message passing algorithm (RAMP), that is adaptive to the error distribution. Our algorithm includes many non-quadratic and non-differentiable loss functions. We derive its asymptotic mean squared error and show its convergence, while allowing \(p, n, s \to \infty\), with \(s/n \to \delta\) and \(p/n \to \kappa\). We identify new patterns of relative efficiency regarding a number of penalized M-estimators, when \(p\) is much larger than \(n\). We show that the classical information bound is no longer reachable, even for light-tailed error distributions. We show that the penalized least absolute deviation estimator dominates the penalized least square estimator, in cases of heavy-tailed distributions. We observe this pattern for all choices of the number of non-zero parameters \(s\), both \(s \le n\) and \(s \approx n\). In non-penalized problems where \(s = p \approx n\), the opposite regime holds. Therefore, we discover that the presence of model selection significantly changes the efficiency patterns.
Full work available at URL: https://arxiv.org/abs/1507.08726
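For readers unfamiliar with the approximate message passing (AMP) framework the abstract builds on, the sketch below implements the classical AMP iteration for \(l_1\)-penalized least squares, i.e. the penalized least-squares baseline against which the paper compares penalized least absolute deviation. It is a minimal illustration under standard AMP assumptions (i.i.d. Gaussian design with variance \(1/n\)), not the paper's RAMP algorithm: RAMP additionally routes the residual update through a robust, possibly non-differentiable loss and adapts to the error distribution. All function names, the threshold rule, and the tuning constant `alpha` are illustrative choices, not the paper's notation.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp_lasso(A, y, alpha=2.0, n_iter=50):
    """AMP iteration for the LASSO (l1-penalized least squares).

    A: (n, p) design with approximately i.i.d. N(0, 1/n) entries; y: (n,) response.
    The per-iteration threshold is set to alpha times the estimated noise level
    of the effective observation, a common practical rule (an assumption here).
    """
    n, p = A.shape
    x = np.zeros(p)
    z = y.copy()
    for _ in range(n_iter):
        tau = alpha * np.linalg.norm(z) / np.sqrt(n)  # effective noise estimate
        r = x + A.T @ z              # decoupled pseudo-observation of the signal
        x_new = soft_threshold(r, tau)
        # Onsager correction term: keeps r distributed as x plus Gaussian noise,
        # which is what makes the asymptotic mean squared error trackable.
        z = y - A @ x_new + (np.count_nonzero(x_new) / n) * z
        x = x_new
    return x

# Example: recover a sparse signal from noisy Gaussian measurements.
rng = np.random.default_rng(0)
n, p, s = 250, 500, 20
A = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, p))
x0 = np.zeros(p)
x0[:s] = rng.normal(size=s)
y = A @ x0 + 0.1 * rng.normal(size=n)
x_hat = amp_lasso(A, y)
```

In the quadratic-loss case above the residual update is simply \(y - Ax\) plus the Onsager term; replacing that step with one driven by a non-quadratic loss (for example absolute deviation for heavy-tailed errors) is, at a high level, where the robustness in RAMP enters.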
Recommendations
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- Robust error density estimation in ultrahigh dimensional sparse linear model
- Robust and sparse multinomial regression in high dimensions
- Penalised robust estimators for sparse and high-dimensional linear models
- Robustness to Unknown Error in Sparse Regularization
- Approximate Message Passing With Consistent Parameter Estimation and Applications to Sparse Learning
- Robust and sparse regression in generalized linear model by stochastic optimization
- Robust and sparse estimators for linear regression models
- Robust and sparse learning of varying coefficient models with high-dimensional features
- Robust sparse Gaussian graphical modeling
Classifications
- Nonparametric robustness (62G35)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Central limit and other weak theorems (60F05)
Cited In (7)
- Overcoming the limitations of phase transition by higher order analysis of regularization techniques
- Asymptotic risk and phase transition of \(l_1\)-penalized robust estimator
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- Automatic bias correction for testing in high‐dimensional linear models
- A tradeoff between false discovery and true positive proportions for sparse high-dimensional logistic regression
- Scale calibration for high-dimensional robust regression
- Detangling robustness in high dimensions: composite versus model-averaged estimation