Statistical consistency and asymptotic normality for high-dimensional robust M-estimators
Abstract: We study theoretical properties of regularized robust M-estimators, applicable when data are drawn from a sparse high-dimensional linear model and contaminated by heavy-tailed distributions and/or outliers in the additive errors and covariates. We first establish a form of local statistical consistency for the penalized regression estimators under fairly mild conditions on the error distribution: When the derivative of the loss function is bounded and satisfies a local restricted curvature condition, all stationary points within a constant radius of the true regression vector converge at the minimax rate enjoyed by the Lasso with sub-Gaussian errors. When an appropriate nonconvex regularizer is used in place of an \(\ell_1\)-penalty, we show that such stationary points are in fact unique and equal to the local oracle solution with the correct support; hence, results on asymptotic normality in the low-dimensional case carry over immediately to the high-dimensional setting. This has important implications for the efficiency of regularized nonconvex M-estimators when the errors are heavy-tailed. Our analysis of the local curvature of the loss function also has useful consequences for optimization when the robust regression function and/or regularizer is nonconvex and the objective function possesses stationary points outside the local region. We show that as long as a composite gradient descent algorithm is initialized within a constant radius of the true regression vector, successive iterates will converge at a linear rate to a stationary point within the local region. Furthermore, the global optimum of a convex regularized robust regression function may be used to obtain a suitable initialization. The result is a novel two-step procedure that uses a convex M-estimator to achieve consistency and a nonconvex M-estimator to increase efficiency.
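The abstract outlines a two-step algorithm: run composite gradient descent on a convex regularized robust loss to obtain a consistent pilot estimate, then re-run it with a nonconvex regularizer from that warm start. The following is a minimal sketch, not the authors' implementation, assuming a Huber loss as the bounded-derivative robust loss and MCP as the nonconvex regularizer; all function names, step sizes, and tuning constants are illustrative.

```python
# Minimal sketch of a two-step robust M-estimation procedure (assumptions:
# Huber loss, l1 / MCP penalties, illustrative tuning constants).
import numpy as np


def huber_grad(r, delta=1.345):
    """Derivative of the Huber loss: bounded, as the theory requires."""
    return np.clip(r, -delta, delta)


def l1_prox(z, lam, eta):
    """Proximal map of eta * lam * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)


def mcp_prox(z, lam, eta, gamma=3.0):
    """Proximal map of the MCP penalty with step size eta (assumes gamma > eta)."""
    shrunk = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0) / (1.0 - eta / gamma)
    return np.where(np.abs(z) <= gamma * lam, shrunk, z)


def composite_gradient_descent(X, y, lam, prox, beta0, n_iter=500):
    """Gradient step on the smooth robust loss, then the prox of the penalty."""
    n, _ = X.shape
    eta = n / np.linalg.norm(X, 2) ** 2  # step size <= 1 / Lipschitz constant
    beta = beta0.copy()
    for _ in range(n_iter):
        grad = -X.T @ huber_grad(y - X @ beta) / n
        beta = prox(beta - eta * grad, lam, eta)
    return beta


def two_step_estimator(X, y, lam):
    """Step 1: convex l1 fit for consistency; step 2: nonconvex MCP fit,
    warm-started at the convex solution, for efficiency."""
    beta_init = composite_gradient_descent(X, y, lam, l1_prox, np.zeros(X.shape[1]))
    return composite_gradient_descent(X, y, lam, mcp_prox, beta_init)
```

The warm start is the point of the two-step design: linear convergence to the good stationary point is guaranteed only when the iterates begin within a constant radius of the true regression vector, and the convex \(\ell_1\)-regularized solution is what places them there.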
Recommendations
- Nonconcave penalized M-estimation with a diverging number of parameters
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- Penalised robust estimators for sparse and high-dimensional linear models
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- On robust regression with high-dimensional predictors
Cited in
(showing the first 100 citing items)
- High-dimensional robust approximated M-estimators for mean regression with asymmetric data
- High-dimensional outlier detection and variable selection via adaptive weighted mean regression
- Fully polynomial-time randomized approximation schemes for global optimization of high-dimensional minimax concave penalized generalized linear models
- A Bernstein-type inequality for high dimensional linear processes with applications to robust estimation of time series regressions
- Penalized maximum likelihood estimation with nonparametric Gaussian scale mixture errors
- Analysis of global and local optima of regularized quantile regression in high dimensions: a subgradient approach
- Tractable Bayesian variable selection: beyond normality
- M-estimators for models with a mix of discrete and continuous parameters
- Computational and statistical analyses for robust non-convex sparse regularized regression problem
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
- Penalized wavelet nonparametric univariate logistic regression for irregular spaced data
- A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery
- The main contributions of robust statistics to statistical science and a new challenge
- D4R: doubly robust reduced rank regression in high dimension
- Sparse estimation and inference for prediction-powered semi-supervised linear regression
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- Variance prior forms for high-dimensional Bayesian variable selection
- Finite-sample analysis of \(M\)-estimators using self-concordance
- Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning
- A Unified Algorithm for Penalized Convolution Smoothed Quantile Regression
- Model selection for varying coefficient nonparametric transformation model
- A New Principle for Tuning-Free Huber Regression
- A unified theory of confidence regions and testing for high-dimensional estimating equations
- Simultaneous feature selection and outlier detection with optimality guarantees
- High dimensional generalized linear models for temporal dependent data
- Batch policy learning in average reward Markov decision processes
- Robust Matrix Completion with Heavy-Tailed Noise
- A general family of trimmed estimators for robust high-dimensional data analysis
- Robust variable selection and estimation via adaptive elastic net S-estimators for linear regression
- Support estimation and sign recovery in high-dimensional heteroscedastic mean regression
- High dimensional logistic regression under network dependence
- Retire: robust expectile regression in high dimensions
- Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis
- Minimum distance Lasso for robust high-dimensional regression
- Gaining Outlier Resistance With Progressive Quantiles: Fast Algorithms and Theoretical Studies
- The finite sample properties of sparse M-estimators with pseudo-observations
- Overview of robust variable selection methods for high-dimensional linear regression model
- An Interpretable and Efficient Infinite-Order Vector Autoregressive Model for High-Dimensional Time Series
- On robust regression with high-dimensional predictors
- Graphical-model based high dimensional generalized linear models
- Robustness and Tractability for Non-convex M-estimators
- Regression analysis: likelihood, error and entropy
- Asymptotic Properties of Stationary Solutions of Coupled Nonconvex Nonsmooth Empirical Risk Minimization
- Regularized adaptive Huber matrix regression and distributed learning
- Robust matrix estimations meet Frank-Wolfe algorithm
- All-in-one robust estimator of the Gaussian mean
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Two Gaussian regularization methods for time-varying networks
- Penalised robust estimators for sparse and high-dimensional linear models
- High dimensional robust M-estimation: asymptotic variance via approximate message passing
- Robust multi-task regression with shifting low-rank patterns
- Manifold sampling for optimization of nonconvex functions that are piecewise linear compositions of smooth components
- A tuning-free robust and efficient approach to high-dimensional regression
- A high-dimensional M-estimator framework for bi-level variable selection
- Robust two-stage estimation in general spatial dynamic panel data models
- Asymptotic behaviour of penalized robust estimators in logistic regression when dimension increases
- Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO
- On model selection consistency of regularized M-estimators
- Variable smoothing for weakly convex composite functions
- Rejoinder to “A Tuning-Free Robust and Efficient Approach to High-Dimensional Regression”
- scientific article without title; zbMATH DE number 7306869
- Adaptive Huber regression on Markov-dependent data
- Nonconcave penalized M-estimation with a diverging number of parameters
- Robust high-dimensional tuning free multiple testing
- Structure learning via unstructured kernel-based M-estimation
- Concentration study of M-estimators using the influence function
- The robust nearest shrunken centroids classifier for high-dimensional heavy-tailed data
- Oracle inequalities for local and global empirical risk minimizers
- How do noise tails impact on deep ReLU networks?
- A refined convergence analysis of \(\mathrm{pDCA}_{e}\) with applications to simultaneous sparse recovery and outlier detection
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- Robust Signal Recovery for High-Dimensional Linear Log-Contrast Models with Compositional Covariates
- Rate-optimal robust estimation of high-dimensional vector autoregressive models
- Asymptotic linear expansion of regularized M-estimators
- Inference for high-dimensional linear expectile regression with de-biasing method
- M-estimation in high-dimensional linear model
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- An outer-inner linearization method for non-convex and nondifferentiable composite regularization problems
- Robust inference for high‐dimensional single index models
- A novel robust estimation for high-dimensional precision matrices
- Sharp non-asymptotic performance bounds for \(\ell_1\) and Huber robust regression estimators
- Adaptive sparse group LASSO in quantile regression
- A statistical learning assessment of Huber regression
- High-dimensional composite quantile regression: optimal statistical guarantees and fast algorithms
- Renewable Huber estimation method for streaming datasets
- Transfer learning for high-dimensional data with heavy-tailed noise: a sparse convoluted rank regression method
- Comment: Feature Screening and Variable Selection via Iterative Ridge Regression
- High-dimensional robust regression with \(L_q\)-loss functions
- The robust desparsified lasso and the focused information criterion for high-dimensional generalized linear models
- Communication-Efficient Distributed Sparse Learning with Oracle Property and Geometric Convergence
- Robust multitask feature learning with adaptive Huber regressions
- Heavy Lasso: sparse penalized regression under heavy-tailed noise via data-augmented soft-thresholding
- Sparse regression for extreme values
- Wavelet-based robust estimation and variable selection in nonparametric additive models
- Robust and computationally efficient gradient-based estimation
- Sparse Laplacian shrinkage for nonparametric transformation survival model
- Robust selection and estimation for sparse multivariate functional nonparametric additive models via regularized Huber regression
- Additive Bayesian variable selection under censoring and misspecification
- Identifiability and estimation of meta-elliptical copula generators
- Scale calibration for high-dimensional robust regression