A theory of the learnable
From MaRDI portal
Publication:3714486
DOI: 10.1145/1968.1972 · zbMATH Open: 0587.68077 · OpenAlex: W4238893454 · Wikidata: Q29398622 · Scholia: Q29398622 · MaRDI QID: Q3714486 · FDO: Q3714486
Author: Leslie G. Valiant
Publication date: 1984
Published in: Communications of the ACM
Full work available at URL: https://doi.org/10.1145/1968.1972
Cited In (showing first 100 items)
- A sufficient condition for polynomial distribution-dependent learnability
- Getting CICY high
- Principles of metareasoning
- Three \(\Sigma^P_2\)-complete problems in computational learning theory
- Evolvability via the Fourier transform
- Learnability of quantified formulas
- Hybrid classification algorithms based on boosting and support vector machines
- Learning a subclass of regular patterns in polynomial time
- Equivalence of models for polynomial learnability
- Noise modelling and evaluating learning from examples
- Partial observability and learnability
- On-line learning of rectangles and unions of rectangles
- The learnability of description logics with equality constraints
- Instability, complexity, and evolution
- Recommendation systems: A probabilistic analysis
- Reasoning, nonmonotonicity and learning in connectionist networks that capture propositional knowledge
- The synthesis of language learners
- Exploring margin setting for good generalization in multiple class discrimination
- Generating logical expressions from positive and negative examples via a branch-and-bound approach
- Playing monotone games to understand learning behaviors
- Polynomial time learning of simple deterministic languages via queries and a representative sample
- Learning a Random DFA from Uniform Strings and State Information
- A subexponential exact learning algorithm for DNF using equivalence queries
- Can PAC learning algorithms tolerate random attribute noise?
- Learning in the limit with lattice-structured hypothesis spaces
- Regression conformal prediction with random forests
- Proper learning of \(k\)-term DNF formulas from satisfying assignments
- Approaching utopia, strong truthfulness and externality-resistant mechanisms
- Resource restricted computability theoretic learning: Illustrative topics and problems
- From learning in the limit to stochastic finite learning
- The PAC-learnability of planning algorithms: Investigating simple planning domains
- Making the Most of Your Samples
- Probability and plurality for aggregations of learning machines
- Learnability of DNF with representation-specific queries
- Defaults and relevance in model-based reasoning
- On learning from queries and counterexamples in the presence of noise
- On the value of partial information for learning from examples
- On the inference of sequences of functions
- DNA sequencing and string learning
- Hardness of approximate two-level logic minimization and PAC learning with membership queries
- Learning random monotone DNF
- Learning from hints in neural networks
- Structural analysis of polynomial-time query learnability
- Some natural conditions on incremental learning
- Improved bounds on the sample complexity of learning
- On the necessity of Occam algorithms
- More efficient PAC-learning of DNF with membership queries under the uniform distribution
- Learning Boolean functions in \(AC^0\) on attribute and classification noise -- estimating an upper bound on attribute and classification noise
- Single-class classification with mapping convergence
- Meta-interpretive learning of higher-order dyadic datalog: predicate invention revisited
- An algebra of human concept learning
- Double Horn functions
- On learning monotone DNF under product distributions
- Simple learning algorithms using divide and conquer
- Bootstrap -- an exploration
- Error-free and best-fit extensions of partially defined Boolean functions
- On the complexity of learning strings and sequences
- On learning embedded midbit functions
- Labeled compression schemes for extremal classes
- Four types of noise in data for PAC learning
- Complexity of computing Vapnik-Chervonenkis dimension and some generalized dimensions
- Shortest consistent superstrings computable in polynomial time
- Towards a mathematical theory of machine discovery from facts
- On the limits of proper learnability of subclasses of DNF formulas
- Neural networks with quadratic VC dimension
- Learning intersections of halfspaces with a margin
- A machine discovery from amino acid sequences by decision trees over regular patterns
- Sharpening Occam's razor
- Cryptographic hardness for learning intersections of halfspaces
- Prediction-preserving reducibility
- Supervised learning and co-training
- An analysis of model-based interval estimation for Markov decision processes
- Logic-based neural networks
- MAT learners for tree series: an abstract data type and two realizations
- An Asymptotic Statistical Theory of Polynomial Kernel Methods
- Learning orthogonal F-Horn formulas
- Learning from examples with unspecified attribute values
- The degree of approximation of sets in Euclidean space using sets with bounded Vapnik-Chervonenkis dimension
- Shadow tomography of quantum states
- A time-series modeling method based on the boosting gradient-descent theory
- Languages as hyperplanes: grammatical inference with string kernels
- Construction and learnability of canonical Horn formulas
- Space-bounded communication complexity
- Probabilistic Inductive Logic Programming
- Classifier-based constraint acquisition
- A Boolean measure of similarity
- Submodular functions: learnability, structure, and optimization
- Classic learning
- The use of tail inequalities on the probable computational time of randomized search heuristics
- Using a similarity measure for credible classification
- How to grow a mind: statistics, structure, and abstraction
- Improving optical Fourier pattern recognition by accommodating the missing information
- Mining probabilistic automata: a statistical view of sequential pattern mining
- Fast reductions from RAMs to delegatable succinct constraint satisfaction problems
- Learning decision trees from random examples
- Approximating shortest superstrings with constraints
- An algorithmic theory of learning: Robust concepts and random projection
- Learning from positive and unlabeled examples