The no-free-lunch theorems of supervised learning
From MaRDI portal
Publication:6187752
Abstract: The no-free-lunch theorems promote the skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must carry an inherent inductive bias that itself wants justification. We argue that many standard learning algorithms should instead be understood as model-dependent: in each application they also require a model, representing a bias, as input. Being generic algorithms themselves, they can be given a model-relative justification.
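The averaging argument behind the no-free-lunch theorems can be illustrated with a minimal sketch (not from the paper; domain size, learners, and names are illustrative): over all possible target functions on a tiny domain, two learners with opposite inductive biases attain the same average accuracy on unseen points.

```python
from itertools import product

X = range(4)            # tiny input domain {0, 1, 2, 3}
train = [0, 1]          # points whose labels the learner observes
test = [2, 3]           # off-training-set points to predict

def majority(labels):
    # the more common training label (ties broken toward 0)
    return int(sum(labels) > len(labels) / 2)

def learner_a(labels):
    # bias: assume the majority training label persists off-sample
    return majority(labels)

def learner_b(labels):
    # the opposite bias: predict the minority training label
    return 1 - majority(labels)

def avg_ots_accuracy(learner):
    # average off-training-set accuracy over EVERY possible binary target
    targets = list(product([0, 1], repeat=len(X)))
    total = 0.0
    for f in targets:
        pred = learner([f[x] for x in train])
        total += sum(pred == f[x] for x in test) / len(test)
    return total / len(targets)

print(avg_ots_accuracy(learner_a), avg_ots_accuracy(learner_b))  # both 0.5
```

Since the unseen labels are unconstrained by the training data, any purely data-driven prediction rule averages to chance level; only an assumed model of how targets are distributed can break the tie, which is the point the abstract presses.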
Cites work
- scientific article; zbMATH DE number 6011007
- scientific article; zbMATH DE number 96338
- scientific article; zbMATH DE number 1332320
- scientific article; zbMATH DE number 1098878
- scientific article; zbMATH DE number 1179314
- scientific article; zbMATH DE number 914840
- scientific article; zbMATH DE number 3327941
- scientific article; zbMATH DE number 3060434
- Absolutely no free lunches!
- Advanced Lectures on Machine Learning
- Black box optimization, machine learning, and no-free lunch theorems
- Can the strengths of AIC and BIC be shared? A conflict between model identification and regression estimation
- Catching up Faster by Switching Sooner: A Predictive Approach to Adaptive Estimation with an Application to the AIC–BIC Dilemma
- Convergence rates of posterior distributions
- Determination and the no-free-lunch paradox
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it
- Inductive logic
- Local induction
- Mechanizing induction
- Minimum description length revisited
- No free lunch versus Occam's razor in supervised learning
- Nonparametric Bayesian model selection and averaging
- On the marginal likelihood and cross-validation
- Pattern classification
- Pattern recognition and machine learning
- Present Position and Potential Developments: Some Personal Views: Statistical Theory: The Prequential Approach
- Putnam's diagonal argument and the impossibility of a universal learning machine
- Simple explanation of the no-free-lunch theorem and its implications
- Statistical learning theory: models, concepts, and results
- Suboptimal behavior of Bayes and MDL in classification under misspecification
- The Oxford handbook of probability and philosophy
- Understanding machine learning. From theory to algorithms
- Would “Direct Realism” Resolve the Classical Problem of Induction?