Noise modelling and evaluating learning from examples
Publication:2674201
DOI: 10.1016/0004-3702(94)00094-8 · zbMATH Open: 1506.68095 · OpenAlex: W2079788575 · MaRDI QID: Q2674201
Authors: Ray J. Hickey
Publication date: 22 September 2022
Published in: Artificial Intelligence
Full work available at URL: https://doi.org/10.1016/0004-3702(94)00094-8
Cites Work
- Title not available
- Very simple classification rules perform well on most commonly used datasets
- Inequalities: theory of majorization and its applications
- A theory of the learnable
- Continuous majorisation and randomness
- Quantifying inductive bias: AI learning algorithms and Valiant's learning framework
- Noise-tolerant Occam algorithms and their applications to learning decision trees
- Majorisation, randomness and some discrete distributions
- A note on the measurement of randomness
Cited In (9)
- Title not available
- Learning concepts by arranging appropriate training order
- Title not available
- Withdrawing an example from the training set: An analytic estimation of its effect on a non-linear parameterised model
- A robust approach to model-based classification based on trimming and constraints. Semi-supervised learning in presence of outliers and label noise
- Anomaly and novelty detection for robust semi-supervised learning
- Asymmetric Error Control Under Imperfect Supervision: A Label-Noise-Adjusted Neyman–Pearson Umbrella Algorithm
- Core clustering as a tool for tackling noise in cluster labels
- Title not available