The no-free-lunch theorems of supervised learning
From MaRDI portal
Publication: 6187752
DOI: 10.1007/S11229-021-03233-1
zbMATH Open: 1529.68268
arXiv: 2202.04513
OpenAlex: W3119241782
Wikidata: Q113900472 (Scholia: Q113900472)
MaRDI QID: Q6187752
FDO: Q6187752
Authors: Tom F. Sterkenburg, Peter D. Grünwald
Publication date: 1 February 2024
Published in: Synthese
Abstract: The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue that many standard learning algorithms should rather be understood as model-dependent: in each application they also require a model as input, representing a bias. Being generic algorithms themselves, they can be given a model-relative justification.
Full work available at URL: https://arxiv.org/abs/2202.04513
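The skeptical reading of the no-free-lunch theorems that the abstract responds to can be illustrated with a minimal sketch (not from the paper; learner names and the tiny domain are my own choices for illustration): averaged uniformly over all possible target functions, every learner achieves the same off-training-set accuracy, no matter how sensible or perverse its inductive bias.

```python
from itertools import product

X = [0, 1, 2, 3]     # a tiny input domain
train_X = [0, 1]     # training inputs
test_X = [2, 3]      # off-training-set inputs

def learner_constant_zero(train_pairs, x):
    # Ignores the data entirely and always predicts 0.
    return 0

def learner_majority(train_pairs, x):
    # Predicts the majority training label (a "sensible" bias).
    labels = [y for _, y in train_pairs]
    return 1 if 2 * sum(labels) > len(labels) else 0

def learner_antimajority(train_pairs, x):
    # Deliberately predicts against the majority (a "perverse" bias).
    return 1 - learner_majority(train_pairs, x)

def avg_ots_accuracy(learner):
    # Average off-training-set accuracy, uniformly over all
    # 2**len(X) = 16 possible binary target functions on X.
    targets = list(product([0, 1], repeat=len(X)))
    total = 0.0
    for f in targets:
        train_pairs = [(x, f[x]) for x in train_X]
        correct = sum(learner(train_pairs, x) == f[x] for x in test_X)
        total += correct / len(test_X)
    return total / len(targets)

# All three learners average exactly 0.5 off the training set,
# since test labels are independent of training labels under the
# uniform prior over target functions.
```

The point of the paper is that this averaging argument presupposes purely data-driven learners; algorithms that additionally take a model as input are not indicted by it.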
Classification (MSC):
- Learning and adaptive systems in artificial intelligence (68T05)
- Foundations and philosophical topics in statistics (62A01)
Cites Work
- Pattern recognition and machine learning
- Pattern classification
- Present Position and Potential Developments: Some Personal Views: Statistical Theory: The Prequential Approach
- Can the strengths of AIC and BIC be shared? A conflict between model identification and regression estimation
- Convergence rates of posterior distributions
- Understanding machine learning. From theory to algorithms
- Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it
- Nonparametric Bayesian model selection and averaging
- The Oxford handbook of probability and philosophy
- Advanced Lectures on Machine Learning
- Catching up Faster by Switching Sooner: A Predictive Approach to Adaptive Estimation with an Application to the AIC–BIC Dilemma
- Simple explanation of the no-free-lunch theorem and its implications
- Statistical learning theory: models, concepts, and results
- Inductive logic
- Suboptimal behavior of Bayes and MDL in classification under misspecification
- Local induction
- Mechanizing induction
- Putnam's diagonal argument and the impossibility of a universal learning machine
- No free lunch versus Occam's razor in supervised learning
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- On the marginal likelihood and cross-validation
- Minimum description length revisited
- Black box optimization, machine learning, and no-free lunch theorems
- Absolutely no free lunches!
- Determination and the no-free-lunch paradox
- Would “Direct Realism” Resolve the Classical Problem of Induction?