Learnability and the Vapnik-Chervonenkis dimension
DOI: 10.1145/76359.76371 · zbMath: 0697.68079 · OpenAlex: W2154952480 · Wikidata: Q56214749 · Scholia: Q56214749 · MaRDI QID: Q3474905
Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, Manfred K. Warmuth
Publication date: 1989
Published in: Journal of the ACM
Full work available at URL: https://doi.org/10.1145/76359.76371
On statistical learning of simplices: unmixing problem revisited
Dynamic sizing of multilayer perceptrons
Efficient distribution-free learning of probabilistic concepts
Nonuniform learnability
A review of combinatorial problems arising in feedforward neural network design
On universal learning algorithms
Halfspace learning, linear programming, and nonmalicious distributions
Learning Boolean concepts in the presence of many irrelevant features
The minimum feature set problem
Vapnik-Chervonenkis dimension and (pseudo-)hyperplane arrangements
Learning unions of high-dimensional boxes over the reals
First-order \(jk\)-clausal theories are PAC-learnable
Toward efficient agnostic learning
The learnability of description logics with equality constraints
Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
The VC dimension of \(k\)-fold union
An algorithmic theory of learning: robust concepts and random projection
Quantifying inductive bias: AI learning algorithms and Valiant's learning framework
On ordinal VC-dimension and some notions of complexity
From learning in the limit to stochastic finite learning
Learning intersections and thresholds of halfspaces
Exploiting label dependencies for improved sample complexity
VC dimensions of principal component analysis
Selection of relevant features and examples in machine learning
The Perceptron algorithm versus Winnow: linear versus logarithmic mistake bounds when few input variables are relevant
On the generalization error of fixed combinations of classifiers
On the relative sizes of learnable sets
Dynamical recognizers: real-time language recognition by analog computers
Supervised learning and co-training
PAC learning, VC dimension, and the arithmetic hierarchy
On the hardness of learning intersections of two halfspaces
Meta-interpretive learning of higher-order dyadic datalog: predicate invention revisited
Robust cutpoints in the logical analysis of numerical data
VC bounds on the cardinality of nearly orthogonal function classes
Generalization error bounds for the logical analysis of data
Improved approximation for guarding simple galleries from the perimeter
On learning a union of half spaces
Using the doubling dimension to analyze the generalization of learning algorithms
The value of agreement, a new boosting algorithm
The Vapnik-Chervonenkis dimension of decision trees with bounded rank
Equivalence of models for polynomial learnability
Almost tight bounds for \(\epsilon\)-nets
Learning in parallel
Complexity of computing Vapnik-Chervonenkis dimension and some generalized dimensions
\(\varepsilon\)-approximations of \(k\)-label spaces
Classic learning
Neural networks with quadratic VC dimension
On the necessity of Occam algorithms
Decision theoretic generalizations of the PAC model for neural net and other learning applications
Three \(\Sigma^P_2\)-complete problems in computational learning theory
A machine discovery from amino acid sequences by decision trees over regular patterns
Reducing mechanism design to algorithm design via machine learning
Structure identification in relational data
Aggregate operators in constraint query languages
Bounding sample size with the Vapnik-Chervonenkis dimension
Playing monotone games to understand learning behaviors
Analysis of a multi-category classifier
Learning in the limit with lattice-structured hypothesis spaces
Maximal pattern complexity, dual system and pattern recognition
\(k\)-Fold unions of low-dimensional concept classes
Partial observability and learnability
Fuzzy lattice reasoning (FLR) classifier and its application for ambient ozone estimation
Multi-category classifiers and sample width
Sharpening Occam's razor
A notion of task relatedness yielding provable multiple-task learning guarantees
An upper bound on the sample complexity of PAC-learning halfspaces with respect to the uniform distribution
Learning commutative deterministic finite state automata in polynomial time
Teachability in computational learning
Combinatorial variability of Vapnik-Chervonenkis classes with applications to sample compression schemes
Scale-sensitive dimensions and skeleton estimates for classification
Vapnik-Chervonenkis dimension of recurrent neural networks
The degree of approximation of sets in Euclidean space using sets with bounded Vapnik-Chervonenkis dimension
Results on learnability and the Vapnik-Chervonenkis dimension
General bounds on statistical query learning and PAC learning with noise via hypothesis boosting
Optimal mistake bound learning is hard
Prediction-preserving reducibility
Prediction, learning, uniform convergence, and scale-sensitive dimensions
Specification and simulation of statistical query algorithms for efficiency and noise tolerance
Learning with unreliable boundary queries
Learning with restricted focus of attention
An introduction to some statistical aspects of PAC learning theory
Sample size lower bounds in PAC learning by Algorithmic Complexity Theory
Using a similarity measure for credible classification
Approximating hyper-rectangles: Learning and pseudorandom sets
New bounds on classical and quantum one-way communication complexity
Noise-tolerant parallel learning of geometric concepts
Maximal width learning of binary functions
Hitting sets when the VC-dimension is small
Deciding the Vapnik-Červonenkis dimension is \(\Sigma_3^p\)-complete
On the learnability of rich function classes
Combinatorics and connectionism
A result of Vapnik with applications
Discrepancy and approximations for bounded VC-dimension
Learnability with respect to fixed distributions
Similarity, kernels, and the fundamental constraints on cognition
Version spaces and the consistency problem
Bounding the vertex cover number of a hypergraph
Sparse approximate multiquadric interpolation
On learning embedded midbit functions
Maximizing agreements and coagnostic learning