Learning from untrusted data
From MaRDI portal
Publication:4977960
Abstract: The vast majority of theoretical results in machine learning and statistics assume that the available training data is a reasonably reliable reflection of the phenomena to be learned or estimated. Similarly, the majority of machine learning and statistical techniques used in practice are brittle to the presence of large amounts of biased or malicious data. In this work we consider two frameworks in which to study estimation, learning, and optimization in the presence of significant fractions of arbitrary data. The first framework, list-decodable learning, asks whether it is possible to return a list of answers, with the guarantee that at least one of them is accurate. For example, given a dataset of points of which an unknown subset is drawn from a distribution of interest, and no assumptions are made about the remaining points, is it possible to return a list of answers, one of which is correct? The second framework, which we term the semi-verified learning model, considers the extent to which a small dataset of trusted data (drawn from the distribution in question) can be leveraged to enable the accurate extraction of information from a much larger but untrusted dataset (of which only an \(\alpha\)-fraction is drawn from the distribution). We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This general result has immediate implications for robust estimation in a number of settings, including for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary.
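The list-decodable guarantee can be illustrated with a toy sketch: when only an \(\alpha\)-fraction of the points is trusted, no single estimate can be reliable, but a list of roughly \(1/\alpha\) candidate means can contain at least one accurate answer. The sketch below uses multi-restart Lloyd-style clustering purely for illustration; the paper's actual algorithm is based on spectral/convex-programming techniques, and the function name, parameters, and data layout here are hypothetical.

```python
import numpy as np

def list_decodable_means(points, alpha, restarts=10, iters=30, seed=0):
    """Toy list-decodable mean estimation (NOT the paper's algorithm).

    Returns a list of candidate means; the hope is that, when an
    alpha-fraction of `points` comes from a well-separated distribution,
    at least one candidate lands near that distribution's mean.
    """
    k = int(np.ceil(1.0 / alpha))          # list size per restart ~ 1/alpha
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(restarts):
        # Initialize k centers at random data points, then run Lloyd steps.
        centers = points[rng.choice(len(points), size=k, replace=False)].copy()
        for _ in range(iters):
            # Assign each point to its nearest center.
            dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each center to the mean of its assigned points.
            for j in range(k):
                mask = labels == j
                if mask.any():
                    centers[j] = points[mask].mean(axis=0)
        candidates.extend(centers)
    return candidates
```

For example, with 100 trusted points drawn near \((5, 5)\) and 300 adversarial points placed in far-away clusters (\(\alpha = 0.25\)), at least one of the returned candidates typically lies close to \((5, 5)\), even though most of the data is arbitrary.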
Recommendations
Cited in (22):
- Robust supervised learning with coordinate gradient descent
- Efficient parameter estimation of truncated Boolean product distributions
- Sampling correctors
- Low rank approximation in the presence of outliers
- A characterization of list learnability
- Algorithms approaching the threshold for semi-random planted clique
- Learning under \(p\)-tampering poisoning attacks
- Robust estimators in high-dimensions without the computational intractability
- Resilience: a criterion for learning in the presence of arbitrary outliers
- A theory of learning with corrupted labels
- Corruption-tolerant bandit learning
- Distributed statistical estimation and rates of convergence in normal approximation
- DLP learning from uncertain data
- Learning discrete distributions from untrusted batches
- Mean estimation and regression under heavy-tailed distributions: A survey
- Learning from Multiple Sources of Inaccurate Data
- Stronger data poisoning attacks break data sanitization defenses
- Brief announcement: Byzantine-tolerant machine learning
- Nearly optimal robust secret sharing against rushing adversaries
- Efficiently learning structured distributions from untrusted batches
- scientific article (zbMATH DE number 6866335; no title recorded)
- Best arm identification for contaminated bandits
This page was built for publication: Learning from untrusted data