Quantifying inductive bias: AI learning algorithms and Valiant's learning framework

From MaRDI portal
Publication: 1106669

DOI: 10.1016/0004-3702(88)90002-1
zbMath: 0651.68104
OpenAlex: W1994022788
MaRDI QID: Q1106669

David Haussler

Publication date: 1988

Published in: Artificial Intelligence

Full work available at URL: https://doi.org/10.1016/0004-3702(88)90002-1



Related Items

Theory refinement combining analytical and empirical methods
Iterative versionspaces
The minimum feature set problem
DNA sequencing and string learning
Autonomous theory building systems
Shifting vocabulary bias in speedup learning
A sufficient condition for polynomial distribution-dependent learnability
An approach to guided learning of Boolean functions
The gap between abstract and concrete results in machine learning
PALO: a probabilistic hill-climbing algorithm
Partial Occam's Razor and its applications
System design and evaluation using discrete event simulation with AI
Noise modelling and evaluating learning from examples
Inductive constraint logic
Advanced discretization techniques for hyperelastic physics-augmented neural networks
Solving the multiple instance problem with axis-parallel rectangles
Parameterized Learnability of k-Juntas and Related Problems
An alternative method of concept learning
Learning the set covering machine by bound minimization and margin-sparsity trade-off
Learning nested concept classes with limited storage
Inductive logic programming
Embedding decision-analytic control in a learning architecture
Principles of metareasoning
Constraint acquisition
Learning decision trees with taxonomy of propositionalized attributes
Decision theoretic generalizations of the PAC model for neural net and other learning applications
Reasoning about model accuracy
Bounding sample size with the Vapnik-Chervonenkis dimension
Hyperrelations in version space
Precise induction from statistical data
Improved learning of \(k\)-parities
On the fusion of threshold classifiers for categorization and dimensionality reduction
Computational sample complexity and attribute-efficient learning
Robust logics
Sharpening Occam's razor
Prediction-preserving reducibility
Specification and simulation of statistical query algorithms for efficiency and noise tolerance
Learning decision trees from random examples
A general lower bound on the number of examples needed for learning
Scaling, machine learning, and genetic neural nets
Parameterized learnability of juntas
Finite electro-elasticity with physics-augmented neural networks
A reduction algorithm meeting users' requirements
Combinatorics and connectionism
A result of Vapnik with applications
A computational study on the performance of artificial neural networks under changing structural design and data distribution
The complexity of theory revision
Version spaces and the consistency problem



Cites Work