Prediction-preserving reducibility
DOI: 10.1016/0022-0000(90)90028-J
zbMATH Open: 0722.68094
OpenAlex: W2020114845
Wikidata: Q56017510 (Scholia: Q56017510)
MaRDI QID: Q756441 (FDO: Q756441)
Leonard Pitt, Manfred K. Warmuth
Publication date: 1990
Published in: Journal of Computer and System Sciences
Full work available at URL: https://doi.org/10.1016/0022-0000(90)90028-j
Keywords: computational learning theory; prediction-preserving reducibility; probably approximately correct learning algorithm
MSC: Learning and adaptive systems in artificial intelligence (68T05); Complexity classes (hierarchies, relations among complexity classes, etc.) (68Q15)
Cites Work
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Learnability and the Vapnik-Chervonenkis dimension
- \(\epsilon\)-nets and simplex range queries
- Learning regular sets from queries and counterexamples
- On uniform circuit complexity
- Queries and concept learning
- A taxonomy of problems with fast parallel algorithms
- Alternation
- A theory of the learnable
- One way functions and pseudorandom generators
- Complete problems for deterministic polynomial time
- A general lower bound on the number of examples needed for learning
- Computational Complexity of Probabilistic Turing Machines
- Occam's razor
- Computational limitations on learning from examples
- Complexity of automaton identification from given data
- On the complexity of minimum inference of regular sets
- Density and dimension
- Learning decision trees from random examples
- Linear programming is log-space hard for P
- On tape-bounded complexity classes and multihead finite automata
- On the Tape Complexity of Deterministic Context-Free Languages
- Space-bounded reducibility among combinatorial problems
- Quantifying inductive bias: AI learning algorithms and Valiant's learning framework
- Analogical and inductive inference. International workshop AII '89, Reinhardsbrunn Castle, GDR, October 1-6, 1989. Proceedings
- Predicting \(\{0,1\}\)-functions on randomly drawn points
- The minimum consistent DFA problem cannot be approximated within any polynomial
- \(\varepsilon\)-productions in context-free grammars
- On the necessity of Occam algorithms
Cited In
- Three \(\Sigma^P_2\)-complete problems in computational learning theory
- Learning fallible deterministic finite automata
- Learning orthogonal F-Horn formulas
- Learnability of quantified formulas
- Equivalence of models for polynomial learnability
- Partial observability and learnability
- The learnability of description logics with equality constraints
- Prediction-hardness of acyclic conjunctive queries
- A framework for polynomial-time query learnability
- Classic learning
- The complexity of learning minor closed graph classes
- Learning unions of tree patterns using queries
- Random embedding machines for pattern recognition
- On reduction of two-parameter prediction problems
- Logical settings for concept-learning
- On the intrinsic complexity of learning recursive functions
- Apple tasting
- On-line learning of linear functions
- Learning commutative deterministic finite state automata in polynomial time
- On learning unions of pattern languages and tree patterns in the mistake bound model
- Learning with unreliable boundary queries
- Approximating hyper-rectangles: Learning and pseudorandom sets
- Pac-learning non-recursive Prolog clauses
- Combinatorial variability of Vapnik-Chervonenkis classes with applications to sample compression schemes
- Structural analysis of polynomial-time query learnability
- Prefix grammars: An alternative characterization of the regular languages
- The query complexity of learning DFA
- On the necessity of Occam algorithms
- Minimizing NFA's and regular expressions
- Interactive Clustering of Linear Classes and Cryptographic Lower Bounds
- Learning elementary formal systems with queries
- On the Hardness of Determining Small NFA’s and of Proving Lower Bounds on Their Sizes
- Cryptographic limitations on parallelizing membership and equivalence queries with applications to random-self-reductions
- Efficient learning of typical finite automata from random walks