Robust estimators in high-dimensions without the computational intractability
DOI: 10.1137/17M1126680 · zbMATH Open: 1421.68149 · arXiv: 1604.06443 · OpenAlex: W2942689850 · Wikidata: Q127954343 (Scholia: Q127954343) · MaRDI QID: Q4634036 · FDO: Q4634036
Authors: Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart
Publication date: 7 May 2019
Published in: SIAM Journal on Computing
Full work available at URL: https://arxiv.org/abs/1604.06443
Recommendations
- Robustly learning a Gaussian: getting optimal error, efficiently
- High-dimensional robust mean estimation in nearly-linear time
- List-decodable robust mean estimation and learning mixtures of spherical Gaussians
- Efficient algorithms and lower bounds for robust linear regression
- All-in-one robust estimator of the Gaussian mean
Classifications: Density estimation (62G07); Nonparametric robustness (62G35); Learning and adaptive systems in artificial intelligence (68T05); Analysis of algorithms and problem complexity (68Q25)
Cites Work
- Elements of Information Theory
- Maximum Likelihood Estimation of Misspecified Models
- Robust principal component analysis?
- Robust Statistics
- A novel M-estimator for robust PCA
- The Fourier transform of Poisson multinomial distributions and its algorithmic applications
- Adaptive estimation of a quadratic functional by model selection.
- Distributional and \(L^q\) norm inequalities for polynomials over convex bodies in \(\mathbb{R}^n\)
- Geometric algorithms and combinatorial optimization
- On the learnability of discrete distributions
- Learning Mixtures of Product Distributions over Discrete Domains
- Combinatorial methods in density estimation
- Rates of convergence of minimum distance estimators and Kolmogorov's entropy
- Toward efficient agnostic learning
- Robust computation of linear models by convex relaxation
- Efficiently learning mixtures of two Gaussians
- Learning in the Presence of Malicious Errors
- On estimation of the diagonal elements of a sparse precision matrix
- The densest hemisphere problem
- A universally acceptable smoothing factor for kernel density estimates
- Nonasymptotic universal smoothing factors, kernel complexity and Yatracos classes
- Evolutionary trees can be learned in polynomial time in the two-state general Markov model
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Learning mixtures of arbitrary Gaussians
- Robust estimators in high-dimensions without the computational intractability
- Learning from untrusted data
- DOI: 10.1162/153244304773936072 (title not available)
- Resilience: a criterion for learning in the presence of arbitrary outliers
- Provable ICA with unknown Gaussian noise, and implications for Gaussian mixtures and autoencoders
- Efficient density estimation via piecewise polynomial approximation
- Sample-optimal density estimation in nearly-linear time
- Learning mixtures of spherical Gaussians: moment methods and spectral decompositions (extended abstract)
- Geometric median in nearly linear time
- Learning mixtures of structured distributions over discrete domains
- Robust moment estimation and improved clustering via sum of squares
- Tight bounds for learning a mixture of two Gaussians (extended abstract)
- List-decodable robust mean estimation and learning mixtures of spherical Gaussians
- Learning geometric concepts with nasty noise
- Learning from satisfying assignments
- Learning discrete distributions from untrusted batches
- Mixture models, robustness, and sum of squares proofs
- A size-free CLT for Poisson multinomials and its applications
- Learning \(k\)-modal distributions via testing
- Learning Halfspaces with Malicious Noise
Cited In (37)
- A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery
- Low rank approximation in the presence of outliers
- Robust regression via multivariate regression depth
- Notes on computational hardness of hypothesis testing: predictions using the low-degree likelihood ratio
- Robust high dimensional expectation maximization algorithm via trimmed hard thresholding
- Maximum selection and sorting with adversarial comparators
- Robustly learning a Gaussian: getting optimal error, efficiently
- Mean estimation with sub-Gaussian rates in polynomial time
- On robustness and local differential privacy
- Robust estimators in high-dimensions without the computational intractability
- Robustness and Tractability for Non-convex M-estimators
- Stronger data poisoning attacks break data sanitization defenses
- All-in-one robust estimator of the Gaussian mean
- Confidence regions and minimax rates in outlier-robust estimation on the probability simplex
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- Robust estimation of covariance matrices: adversarial contamination and beyond
- Robust inference and local algorithms
- Resilience: a criterion for learning in the presence of arbitrary outliers
- High-dimensional robust mean estimation in nearly-linear time
- Nearly optimal robust mean estimation via empirical characteristic function
- Efficient parameter estimation of truncated Boolean product distributions
- Mean estimation and regression under heavy-tailed distributions: A survey
- Graph powering and spectral robustness
- Robust classification via MOM minimization
- Multidimensional linear functional estimation in sparse Gaussian models and robust estimation of the mean
- Improved covariance estimation: optimal robustness and sub-Gaussian guarantees under heavy tails
- Robust multivariate mean estimation: the optimality of trimmed mean
- List-decodable robust mean estimation and learning mixtures of spherical Gaussians
- Efficient algorithms and lower bounds for robust linear regression
- Learning from untrusted data
- Finite sample properties of parametric MMD estimation: robustness to misspecification and dependence
- Robust supervised learning with coordinate gradient descent
- Corruption-tolerant bandit learning
- Learning under \(p\)-tampering poisoning attacks
- Generalized resilience and robust statistics
- Robust sub-Gaussian estimation of a mean vector in nearly linear time