Penalised robust estimators for sparse and high-dimensional linear models
DOI: 10.1007/s10260-020-00511-z · zbMATH Open: 1474.62182 · OpenAlex: W3003886944 · MaRDI QID: Q2664993
Authors: Anestis Antoniadis, Umberto Amato, Italia De Feis, Irène Gijbels
Publication date: 18 November 2021
Published in: Statistical Methods and Applications
Full work available at URL: https://lirias.kuleuven.be/handle/123456789/649460
Recommendations
- Nonconcave penalized M-estimation with a diverging number of parameters
- M-estimation in high-dimensional linear model
- Robust and sparse estimators for linear regression models
- Robust sparse regression with high-breakdown value
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
Keywords: outliers; variable selection; regularization; high-dimensional regression; contamination; wavelet thresholding; nonconvex penalties
MSC classifications: Nonparametric regression and quantile regression (62G08); Nonparametric robustness (62G35); Estimation in multivariate analysis (62H12)
Cites Work
- Sparse least trimmed squares regression for analyzing high-dimensional large data sets
- Robust Linear Model Selection Based on Least Angle Regression
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- Least angle regression. (With discussion)
- Title not available
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Title not available
- Lasso-type recovery of sparse representations for high-dimensional data
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Regularization and Variable Selection Via the Elastic Net
- Robust Estimation of a Location Parameter
- Robust Statistics
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- Hedonic housing prices and the demand for clean air
- Asymptotics for Lasso-type estimators.
- Sparsity oracle inequalities for the Lasso
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Scaled sparse linear regression
- Adaptive Lasso for sparse high-dimensional regression models
- Unified LASSO Estimation by Least Squares Approximation
- Title not available
- Title not available
- \(\ell_{1}\)-penalization for mixture regression models
- Robust regression through the Huber's criterion and adaptive lasso penalty
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- Regularization of Wavelet Approximations
- Title not available
- The \(L_1\) penalized LAD estimator for high dimensional linear regression
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Estimation of High Dimensional Mean Regression in the Absence of Symmetry and Light Tail Assumptions
- Penalized likelihood regression for generalized linear models with non-quadratic penalties
- Comments on: \(\ell_{1}\)-penalization for mixture regression models
- Comments on: \(\ell_{1}\)-penalization for mixture regression models
- Weighted LAD-LASSO method for robust parameter estimation and variable selection in regression
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- Wavelet methods in statistics: some recent developments and their applications
- Outlier detection using nonconvex penalized regression
- Wavelet estimation of partially linear models
- Information-Theoretic Limits on Sparsity Recovery in the High-Dimensional and Noisy Setting
- Penalized Composite Quasi-Likelihood for Ultrahigh Dimensional Variable Selection
- Pushing the Limits of Contemporary Statistics: Contributions in Honor of Jayanta K. Ghosh
- Asymptotic behavior of general M-estimates for regression and scale with random carriers
- Robust and sparse estimators for linear regression models
- Introduction to robust and quasi-robust statistical methods
- Robust and consistent variable selection in high-dimensional generalized linear models
- Techniques for nonlinear least squares and robust regression
- Penalized partially linear models using sparse representations with an application to fMRI time series
- Fully efficient robust estimation, outlier detection, and variable selection via penalized regression
- Robust nonnegative garrote variable selection in linear regression
- The power of monitoring: how to make the most of a contaminated multivariate sample
- Robust elastic net estimators for variable selection and identification of proteomic biomarkers
- New robust variable selection methods for linear regression models
- Consistency and robustness properties of the S-nonnegative garrote estimator
Cited In (31)
- Generalized thresholding estimators for high-dimensional location parameters
- Robust and sparse estimators for linear regression models
- Robust sparse regression with high-breakdown value
- Simultaneous feature selection and outlier detection with optimality guarantees
- Large-scale regression with non-convex loss and penalty
- A general family of trimmed estimators for robust high-dimensional data analysis
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- On robust regression with high-dimensional predictors
- A tuning-free robust and efficient approach to high-dimensional regression
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- Ultrahigh dimensional variable selection through the penalized maximum trimmed likelihood estimator
- Influence functions for penalized M-estimators
- Nonconcave penalized M-estimation with a diverging number of parameters
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- Robust statistical boosting with quantile-based adaptive loss functions
- Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
- M-estimation in high-dimensional linear model
- Penalized robust estimators in sparse logistic regression
- Approximate separability of symmetrically penalized least squares in high dimensions: characterization and consequences
- A method of least absolute deviation estimator with adaptive weighted penalty
- Wavelet-based robust estimation and variable selection in nonparametric additive models
- Bootstrap estimation of the proportion of outliers in robust regression
- Robust subset selection
- Robustness in sparse high-dimensional linear models: relative efficiency and robust approximate message passing
- Penalized wavelet estimation and robust denoising for irregular spaced data
- Robust elastic net estimators for variable selection and identification of proteomic biomarkers
- The influence function of penalized regression estimators
- M-estimation and model identification based on double SCAD penalization
- Sparse least trimmed squares regression for analyzing high-dimensional large data sets
- Sparse linear regression models of high dimensional covariates with non-Gaussian outliers and Berkson error-in-variable under heteroscedasticity
- Penalized wavelet nonparametric univariate logistic regression for irregular spaced data