Scientific article
From MaRDI portal
Publication: 3704910
zbMath: 0582.68047 · MaRDI QID: Q3704910
Publication date: 1982
Title: unavailable (zbMATH Open Web Interface contents cannot be displayed due to conflicting licenses).
Keywords: recursive languages; inference hierarchies; approximate decision procedure; inference of grammars; perfectly correct decision procedures; r.e. languages
MSC: Formal languages and automata (68Q45); Automata and formal grammars in connection with logical questions (03D05); Recursive functions and relations, subrecursive hierarchies (03D20)
Related Items (78)
Intrinsic complexity of learning geometrical concepts from positive data ⋮ Learning Families of Closed Sets in Matroids ⋮ Program size restrictions in computational learning ⋮ Learnability: Admissible, co-finite, and hypersimple languages ⋮ Learning in the presence of inaccurate information ⋮ Inductive inference and reverse mathematics ⋮ Results on memory-limited U-shaped learning ⋮ Strongly non-U-shaped language learning results by general techniques ⋮ On aggregating teams of learning machines ⋮ Learning pattern languages over groups ⋮ Prudence in vacillatory language identification ⋮ Learning from Positive Data and Negative Counterexamples: A Survey ⋮ Mind change speed-up for learning languages from positive data ⋮ Learning and classifying ⋮ Probabilistic language learning under monotonicity constraints ⋮ Intrinsic complexity of partial learning ⋮ Iterative learning from texts and counterexamples using additional information ⋮ Parallelism Increases Iterative Learning Power ⋮ Prescribed Learning of R.E. Classes ⋮ Learning in Friedberg Numberings ⋮ Prudence and other conditions on formal language learning ⋮ Iterative Learning of Simple External Contextual Languages ⋮ Learning indexed families of recursive languages from positive data: A survey ⋮ Learning and extending sublanguages ⋮ Learning by switching type of information. ⋮ Non-U-shaped vacillatory and team learning ⋮ Learning languages from positive data and negative counterexamples ⋮ Learnability and positive equivalence relations ⋮ Learning in the presence of partial explanations ⋮ Learning in Friedberg numberings ⋮ Monotonic and dual monotonic language learning ⋮ Iterative learning from positive data and negative counterexamples ⋮ A general comparison of language learning from examples and from queries ⋮ Learning multiple languages in groups ⋮ Learning languages from positive data and a limited number of short counterexamples ⋮ Characterizing language identification in terms of computable numberings ⋮ On the non-existence of maximal inference degrees for language identification ⋮ Learning languages from positive data and a finite number of queries ⋮ Hypothesis spaces for learning ⋮ Automatic learning from positive data and negative counterexamples ⋮ On the learnability of recursively enumerable languages from good examples ⋮ Synthesizing noise-tolerant language learners ⋮ Synthesizing learners tolerating computable noisy data ⋮ Iterative learning of simple external contextual languages ⋮ Vacillatory and BC learning on noisy data ⋮ Variations on U-shaped learning ⋮ Learnability and positive equivalence relations ⋮ Comparison of identification criteria for machine inductive inference ⋮ Learners based on transducers ⋮ Learning algebraic structures from text ⋮ Closedness properties in ex-identification ⋮ Partial learning of recursively enumerable languages ⋮ Topological separations in inductive inference ⋮ Resource restricted computability theoretic learning: Illustrative topics and problems ⋮ Input-dependence in function-learning ⋮ Hypothesis Spaces for Learning ⋮ U-shaped, iterative, and iterative-with-counter learning ⋮
Prescribed learning of r.e. classes ⋮ Parallelism increases iterative learning power ⋮ Learning Finite Variants of Single Languages from Informant ⋮ Learning Pattern Languages over Groups ⋮ Incremental learning of approximations from positive data ⋮ Learning with refutation ⋮ Iterative Learning from Texts and Counterexamples Using Additional Information ⋮ Learning from Streams ⋮ Combining Models of Approximation with Partial Learning ⋮ Maximal machine learnable classes ⋮ The synthesis of language learners. ⋮ Inductive inference of approximations for recursive concepts ⋮ Relations between Gold-style learning and query learning ⋮ Maps of restrictions for behaviourally correct learning ⋮ On the inference of approximate programs ⋮ Language learning from texts: Degrees of intrinsic complexity and their characterizations ⋮ Inductive inference with additional information. ⋮ On the role of update constraints and text-types in iterative learning ⋮ Learning languages with decidable hypotheses ⋮ Mapping monotonic restrictions in inductive inference ⋮ Normal forms for semantically witness-based learners in inductive inference