Quantifying inductive bias: AI learning algorithms and Valiant's learning framework (Q1106669)

From MaRDI portal
scientific article

    Statements

    Quantifying inductive bias: AI learning algorithms and Valiant's learning framework (English)
    1988
    We show that the notion of inductive bias in concept learning can be quantified in a way that directly relates to learning performance in the framework recently introduced by Valiant. Our measure of bias is based on the growth function introduced by Vapnik and Chervonenkis, and on the Vapnik-Chervonenkis dimension. We measure some common language biases, including restriction to conjunctive concepts, conjunctive concepts with internal disjunction, k-DNF and k-CNF concepts. We also measure certain types of bias that result from a preference for simpler hypotheses. Using these bias measurements we analyze the performance of the classical learning algorithm for conjunctive concepts from the perspective of Valiant's learning framework. We then augment this algorithm with a hypothesis simplification routine that uses a greedy heuristic and show how this improves learning performance on simpler target concepts. Improved learning algorithms are also developed for conjunctive concepts with internal disjunction, k-DNF and k-CNF concepts. We show that all our algorithms are within a logarithmic factor of optimal in terms of the number of examples they require to achieve a given level of learning performance in the Valiant framework. Our results hold for arbitrary attribute-based instance spaces defined by either tree-structured or linear attributes.
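    The classical learning algorithm for conjunctive concepts that the abstract analyzes can be sketched as an elimination procedure: begin with a hypothesis containing every literal, and for each positive example delete the literals it violates. The sketch below is a minimal illustration over boolean attribute vectors; the function names and the example data are illustrative, not taken from the paper.

    ```python
    def learn_conjunction(examples, n_attrs):
        """Elimination algorithm for conjunctive concepts.

        The hypothesis is a set of literals (index, value), read as the
        constraint x[index] == value. Start with all 2*n_attrs literals;
        each positive example eliminates the literals it falsifies.
        Negative examples are ignored by this classical algorithm.
        """
        hyp = {(i, v) for i in range(n_attrs) for v in (True, False)}
        for x, label in examples:
            if label:
                hyp = {(i, v) for (i, v) in hyp if x[i] == v}
        return hyp

    def predict(hyp, x):
        # Classify positive iff every remaining literal is satisfied.
        return all(x[i] == v for (i, v) in hyp)

    # Hypothetical target concept: x0 AND NOT x2, over 3 boolean attributes.
    examples = [
        ((True, True, False), True),    # positive
        ((True, False, False), True),   # positive
        ((False, True, False), False),  # negative (ignored)
    ]
    h = learn_conjunction(examples, 3)
    print(sorted(h))  # surviving literals: constraints on x0 and x2
    ```

    The hypothesis returned is always the most specific conjunction consistent with the positive examples seen; the paper's contribution is to relate the sample complexity of algorithms like this to bias measurements based on the growth function and the Vapnik-Chervonenkis dimension.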
    inductive bias
    concept learning
    bias measurements