Quantifying inductive bias: AI learning algorithms and Valiant's learning framework (Q1106669)

From MaRDI portal
 
Property / cites work: Queries and concept learning
Property / cites work: Occam's razor
Property / cites work: Learnability and the Vapnik-Chervonenkis dimension
Property / cites work: An analytical comparison of some rule-learning programs
Property / cites work: A Greedy Heuristic for the Set-Covering Problem
Property / cites work: Enumeration of Seven-Argument Threshold Functions
Property / cites work: Learning decision trees from random examples
Property / cites work: A general lower bound on the number of examples needed for learning
Property / cites work: Q4198056
Property / cites work: \(\epsilon\)-nets and simplex range queries
Property / cites work: Approximation algorithms for combinatorial problems
Property / cites work: On Knapsacks, Partitions, and a New Dynamic Programming Technique for Trees
Property / cites work: Q4386952
Property / cites work: On the connection between the complexity and credibility of inferred models
Property / cites work: Modeling by shortest data description
Property / cites work: A theory of the learnable
Property / cites work: Computational limitations on learning from examples
Property / cites work: Estimation of dependences based on empirical data. Transl. from the Russian by Samuel Kotz
Property / cites work: On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
Property / cites work: Some special Vapnik-Chervonenkis classes
Property / cites work: Learning structural shape descriptions from examples


    Language: English
    Label: Quantifying inductive bias: AI learning algorithms and Valiant's learning framework
    Description: scientific article

    Statements

    Quantifying inductive bias: AI learning algorithms and Valiant's learning framework (English)
    1988 (publication year)
    We show that the notion of inductive bias in concept learning can be quantified in a way that directly relates to learning performance in the framework recently introduced by Valiant. Our measure of bias is based on the growth function introduced by Vapnik and Chervonenkis, and on the Vapnik-Chervonenkis dimension. We measure some common language biases, including restriction to conjunctive concepts, conjunctive concepts with internal disjunction, k-DNF and k-CNF concepts. We also measure certain types of bias that result from a preference for simpler hypotheses. Using these bias measurements we analyze the performance of the classical learning algorithm for conjunctive concepts from the perspective of Valiant's learning framework. We then augment this algorithm with a hypothesis simplification routine that uses a greedy heuristic and show how this improves learning performance on simpler target concepts. Improved learning algorithms are also developed for conjunctive concepts with internal disjunction, k-DNF and k-CNF concepts. We show that all our algorithms are within a logarithmic factor of optimal in terms of the number of examples they require to achieve a given level of learning performance in the Valiant framework. Our results hold for arbitrary attribute-based instance spaces defined by either tree-structured or linear attributes.
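
    The example counts referred to in the abstract come from the growth function and the Vapnik-Chervonenkis dimension: for a hypothesis class of VC dimension d, roughly m = O((1/epsilon)(d ln(1/epsilon) + ln(1/delta))) random examples suffice for any consistent learner to be epsilon-accurate with probability at least 1 - delta. The sketch below is only an illustrative reconstruction, not the paper's procedure: it restricts to plain Boolean attributes (the paper handles tree-structured and linear attributes), and the function names and the numeric constants in sample_size are assumptions chosen for illustration. It shows the classical elimination algorithm for conjunctive concepts followed by a greedy set-covering simplification of the surviving literals, in the spirit of the hypothesis simplification routine the abstract describes.

from math import ceil, log

def sample_size(vc_dim, epsilon, delta):
    # Illustrative VC-style sample-size estimate: enough random examples so
    # that any consistent hypothesis is epsilon-accurate with probability at
    # least 1 - delta.  The constants are generic, not the paper's.
    return ceil((4.0 / epsilon) * (vc_dim * log(13.0 / epsilon) + log(2.0 / delta)))

def learn_conjunction(positives, negatives):
    # Classical elimination step: keep every literal (attribute, value) that
    # agrees with all positive examples.  Examples are tuples of 0/1.
    n = len((positives or negatives)[0])
    literals = {(i, v) for i in range(n) for v in (0, 1)
                if all(x[i] == v for x in positives)}

    # Greedy simplification: each literal "covers" the negative examples it
    # rules out; pick literals greedily (set-covering heuristic) until every
    # negative example is excluded, yielding a short consistent conjunction.
    def excluded_by(lit):
        i, v = lit
        return {j for j, x in enumerate(negatives) if x[i] != v}

    uncovered = set(range(len(negatives)))
    hypothesis = []
    while uncovered and literals:
        best = max(literals, key=lambda lit: len(excluded_by(lit) & uncovered))
        gain = excluded_by(best) & uncovered
        if not gain:
            break  # remaining negatives cannot be excluded by any kept literal
        hypothesis.append(best)
        uncovered -= gain
        literals.discard(best)
    return hypothesis  # conjunction of (attribute, required value) literals

def predict(hypothesis, x):
    return all(x[i] == v for i, v in hypothesis)

    For instance, with positives = [(1, 1, 0), (1, 1, 1)] and negatives = [(0, 1, 0), (1, 0, 1)], the sketch returns the conjunction requiring attributes 0 and 1 to equal 1.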
    Keywords: inductive bias; concept learning; bias measurements

    Identifiers