Learning in the Presence of Malicious Errors

Publication:3137710

DOI: 10.1137/0222052
zbMath: 0789.68118
OpenAlex: W1968998685
MaRDI QID: Q3137710

Michael Kearns, Ming Li

Publication date: 10 October 1993

Published in: SIAM Journal on Computing

Full work available at URL: https://doi.org/10.1137/0222052

Related Items (41)

Stronger data poisoning attacks break data sanitization defenses
Best Arm Identification for Contaminated Bandits
DNA sequencing and string learning
Learning conjunctions with noise under product distributions
On domain-partitioning induction criteria: worst-case bounds for the worst-case based
Toward efficient agnostic learning
Learning to Recognize Three-Dimensional Objects
Can PAC learning algorithms tolerate random attribute noise?
Exact learning from an honest teacher that answers membership queries
Learning fallible deterministic finite automata
An experimental evaluation of simplicity in rule learning
Watermarking Cryptographic Capabilities
Robust multivariate mean estimation: the optimality of trimmed mean
Knowing what doesn't matter: exploiting the omission of irrelevant data
Learning nested differences in the presence of malicious noise
Data poisoning against information-theoretic feature selection
Recursive reasoning-based training-time adversarial machine learning
On the difficulty of approximately maximizing agreements.
Learning under \(p\)-tampering poisoning attacks
Learning Kernel Perceptrons on Noisy Data Using Random Projections
Incentive compatible regression learning
Machine learning in adversarial environments
The security of machine learning
Robust Estimators in High-Dimensions Without the Computational Intractability
Four types of noise in data for PAC learning
Classic learning
Learning nested differences in the presence of malicious noise
Maximizing agreements with one-sided error with applications to heuristic learning
Maximizing agreements with one-sided error with applications to heuristic learning
Robust logics
Boosting in the Presence of Outliers: Adaptive Classification With Nonconvex Loss Functions
An Improved Branch-and-Bound Method for Maximum Monomial Agreement
Specification and simulation of statistical query algorithms for efficiency and noise tolerance
Learning with unreliable boundary queries
Learning with restricted focus of attention
Robust and efficient mean estimation: an approach based on the properties of self-normalized sums
Exact Learning of Discretized Geometric Concepts
PAC learning with nasty noise.
Improved lower bounds for learning from noisy examples: An information-theoretic approach
On the robustness of randomized classifiers to adversarial examples
Maximizing agreements and coagnostic learning

This page was built for publication: Learning in the Presence of Malicious Errors