Robust estimation via generalized quasi-gradients
Publication:5095264
DOI: 10.1093/IMAIAI/IAAB018
zbMATH Open: 1493.62183
arXiv: 2005.14073
OpenAlex: W3191689423
MaRDI QID: Q5095264
FDO: Q5095264
Authors: Banghua Zhu, Jiantao Jiao, Jacob Steinhardt
Publication date: 5 August 2022
Published in: Information and Inference: A Journal of the IMA
Abstract: We explore why many recently proposed robust estimation problems are efficiently solvable, even though the underlying optimization problems are non-convex. We study the loss landscape of these robust estimation problems and identify the existence of "generalized quasi-gradients". Whenever these quasi-gradients exist, a large family of low-regret algorithms is guaranteed to approximate the global minimum; this includes the commonly used filtering algorithm. For robust mean estimation of distributions under bounded covariance, we show that any first-order stationary point of the associated optimization problem is an approximate global minimum if and only if the corruption level ε < 1/3. Consequently, any optimization algorithm that approaches a stationary point yields an efficient robust estimator with breakdown point 1/3. With careful initialization and step size, we improve this to 1/2, which is optimal. For other tasks, including linear regression and joint mean and covariance estimation, the loss landscape is more rugged: there are stationary points arbitrarily far from the global minimum. Nevertheless, we show that generalized quasi-gradients exist and construct efficient algorithms. These algorithms are simpler than previous ones in the literature, and for linear regression we improve the estimation error from O(√ε) to the optimal rate of O(ε) for small ε, assuming certified hypercontractivity. For mean estimation with near-identity covariance, we show that a simple gradient descent algorithm achieves breakdown point 1/3 and iteration complexity Õ(d/ε²).
Full work available at URL: https://arxiv.org/abs/2005.14073
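The filtering idea referenced in the abstract can be sketched as follows: maintain weights over samples and repeatedly downweight the points that contribute most to the top eigenvalue of the weighted covariance, until that eigenvalue is small. This is a minimal illustrative sketch, not the authors' exact algorithm; the function name, threshold, and update rule are assumptions for illustration.

```python
import numpy as np

def filter_mean(X, threshold=2.0, max_iter=100):
    """Hedged sketch of a filtering-style robust mean estimator.

    X: (n, d) array of samples, a fraction of which may be corrupted.
    threshold: stop once the top eigenvalue of the weighted covariance
    is below this value (assumes inliers have roughly unit covariance).
    """
    n, d = X.shape
    w = np.ones(n) / n                      # uniform initial weights
    for _ in range(max_iter):
        mu = w @ X                          # weighted mean
        centered = X - mu
        cov = centered.T @ (w[:, None] * centered)  # weighted covariance
        eigvals, eigvecs = np.linalg.eigh(cov)
        lam, v = eigvals[-1], eigvecs[:, -1]        # top eigenpair
        if lam <= threshold:
            break
        tau = (centered @ v) ** 2           # outlyingness along top direction
        w = w * (1.0 - tau / tau.max())     # downweight extreme points
        w = w / w.sum()                     # renormalize
    return w @ X
```

A quick usage check: with 10% of samples shifted far from the origin, the filtered estimate stays much closer to the true mean than the naive sample mean.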
Recommendations
- Robust Estimation via Robust Gradient Estimation
- The robust generalized least-squares estimator
- Robust estimation for nonparametric generalized regression
- Robust estimators for generalized linear models
- Quasi-likelihood and/or robust estimation in high dimensions
- On a Class of Optimization-Based Robust Estimators
Cited In (12)
- Efficient learning with robust gradient descent
- Robustifying Markowitz
- Generalized Robust Shrinkage Estimator and Its Application to STAP Detection Problem
- Resilience: a criterion for learning in the presence of arbitrary outliers
- Nearly optimal robust mean estimation via empirical characteristic function
- Robust estimation in inverse problems via quantile coupling
- The robust desparsified lasso and the focused information criterion for high-dimensional generalized linear models
- Efficient algorithms and lower bounds for robust linear regression
- \(\sqrt n\)-consistent robust integration-based estimation
- Robust supervised learning with coordinate gradient descent
- Median-truncated gradient descent: a robust and scalable nonconvex approach for signal estimation
- Generalized resilience and robust statistics