Toward a mathematical theory of inductive inference
From MaRDI portal
Publication:4154852
DOI: 10.1016/S0019-9958(75)90261-2
zbMath: 0375.02028
Wikidata: Q56224664
Scholia: Q56224664
MaRDI QID: Q4154852
Publication date: 1975
Published in: Information and Control
Mathematics Subject Classification:
Analysis of algorithms and problem complexity (68Q25)
Automata and formal grammars in connection with logical questions (03D05)
Recursive functions and relations, subrecursive hierarchies (03D20)
Applications of computability and recursion theory (03D80)
Computability and recursion theory (03D99)
Turing machines and related notions (03D10)
Algorithms in computer science (68W99)
Related Items
Characterizing language identification by standardizing operations, Towards a new theory of confirmation, The complexity of finding SUBSEQ\((A)\), On the danger of half-truths, Prediction of infinite words with automata, Learning in the presence of inaccurate information, Inductive inference and reverse mathematics, A map of update constraints in inductive inference, Parallel learning of automatic classes of languages, Probability and plurality for aggregations of learning machines, Index sets in the arithmetical hierarchy, Synthesizing inductive expertise, Learning languages in a union, Strongly non-U-shaped language learning results by general techniques, Enlarging learnable classes, An approach to intrinsic complexity of uniform learning, From learning in the limit to stochastic finite learning, On the power of recursive optimizers, Saving the phenomena: Requirements that inductive inference machines not contradict known data, Learning pattern languages over groups, Paradigms of truth detection, Learning all subfunctions of a function, Classes with easily learnable subclasses, On the classification of recursive languages, A theory of formal synthesis via inductive learning, On the relative sizes of learnable sets, Learning and classifying, Kolmogorov numberings and minimal identification, Probabilistic language learning under monotonicity constraints, Noisy inference and oracles, Iterative learning from texts and counterexamples using additional information, Research in the theory of inductive inference by GDR mathematicians - A survey, Learning with ordinal-bounded memory from positive data, Prudence and other conditions on formal language learning, Optimal language learning from positive data, Two notions of correctness and their relation to testing, Learning how to separate., Separation of uniform learning classes., Learning recursive functions: A survey, Reflective inductive inference of recursive functions, Quantum inductive inference by finite automata, 
Absolute versus probabilistic classification in a logical setting, Some classes of term rewriting systems inferable from positive data, Learning indexed families of recursive languages from positive data: A survey, Learning and extending sublanguages, Confident and consistent partial learning of recursive functions, Learning by switching type of information., Non-U-shaped vacillatory and team learning, Learning languages from positive data and negative counterexamples, Inferring answers to queries, Taming teams with mind changes, Unscrambling the quantum omelette, Learning in the presence of partial explanations, Learning in Friedberg numberings, Anomalous learning helps succinctness, Aggregating inductive expertise on partial recursive functions, Monotonic and dual monotonic language learning, On a question about learning nearly minimal programs, Towards a mathematical theory of machine discovery from facts, Characterizing language identification in terms of computable numberings, On the non-existence of maximal inference degrees for language identification, Consistent and coherent learning with \(\delta \)-delay, Automatic learning of subclasses of pattern languages, On the power of inductive inference from good examples, Numberings optimal for learning, Hypothesis spaces for learning, On some open problems in reflective inductive inference, Iterative learning of simple external contextual languages, Incremental learning with temporary memory, Learning in the limit with lattice-structured hypothesis spaces, On some open problems in monotonic and conservative learning, Comparison of identification criteria for machine inductive inference, Some natural properties of strong-identification in inductive inference, On the inference of optimal descriptions, Input-dependence in function-learning, Toward the interpretation of non-constructive reasoning as non-monotonic learning, Model discrimination using an algorithmic information criterion, Polynomial-time inference of 
arbitrary pattern languages, Prescribed learning of r.e. classes, Investigations on measure-one identification of classes of languages, A note on batch and incremental learnability, On the power of probabilistic strategies in inductive inference, Incremental learning of approximations from positive data, Robust learning aided by context, Learning languages and functions by erasing, Some classes of Prolog programs inferable from positive data, Structural measures for games and process control in the branch learning model, Approximation methods in inductive inference, Note on a central lemma for learning theory, Maximal machine learnable classes, Numerical methods and questions in the organization of calculus. XII. Transl. from the Russian, One-sided error probabilistic inductive inference and reliable frequency identification, On the inference of approximate programs, Avoiding coding tricks by hyperrobust learning, Language learning from texts: Degrees of intrinsic complexity and their characterizations, Inductive inference in the limit for first-order sentences, A model for science kinematics, Extremes in the degrees of inferability, Training sequences, On the role of update constraints and text-types in iterative learning, Reflecting and self-confident inductive inference machines, Machine induction without revolutionary paradigm shifts, Simulating teams with many conjectures, Language learnability in the limit: a generalization of Gold's theorem, Synthesizing learners tolerating computable noisy data, Robust learning is rich, Learnability and positive equivalence relations, Control structures in hypothesis spaces: The influence on learning, Learning algebraic structures from text, A comparison of identification criteria for inductive inference of recursive real-valued functions, Closedness properties in ex-identification, Topological separations in inductive inference, On learning of functions refutably., Learning power and language 
expressiveness., On an open problem in classification of languages, Trees and learning, A solution to Wiehagen's thesis, Robust learning -- rich and poor, Generalized notions of mind change complexity, Generality's price: Inescapable deficiencies in machine-learned programs, Counting extensional differences in BC-learning, On the classification of computable languages, The complexity of universal text-learners, Memory limited inductive inference machines, Learnability: Admissible, co-finite, and hypersimple languages, Automatic learners with feedback queries, `Ideal learning' of natural language: positive results about learning from positive evidence, Inside the Muchnik degrees. I: Discontinuity, learnability and constructivism, Inductive inference in the limit of empirically adequate theories, Parsimony hierarchies for inductive inference, Infinitary self-reference in learning theory, Characterization of language learning from informant under various monotonicity constraints, Ignoring data may be the only way to learn efficiently, The gap between abstract and concrete results in machine learning, Generalization versus classification, Prudence in vacillatory language identification, Machine learning of higher-order programs, Learning theory in the arithmetic hierarchy. II., Computability-theoretic learning complexity, Recursion theoretic models of learning: Some results and intuitions, PROBLEMS WITH COMPLEXITY IN GOLD'S PARADIGM OF INDUCTION Part I: Dynamic Complexity, PROBLEMS WITH COMPLEXITY IN GOLD'S PARADIGM OF INDUCTION Part II: Static Complexity, Learning secrets interactively. Dynamic modeling in inductive inference, Learning efficient logic programs, Gold-Style Learning Theory, Efficiency in the Identification in the Limit Learning Paradigm, Learning Tree Languages, Learning figures with the Hausdorff metric by fractals -- towards computable binary classification, Inductive inference and computable numberings, Prescribed Learning of R.E. 
Classes, Learning in Friedberg Numberings, Learning Theory and Epistemology, Absolutely no free lunches!, Inside the Muchnik degrees. II: The degree structures induced by the arithmetical hierarchy of countably continuous functions, Iterative Learning of Simple External Contextual Languages, Dynamically Delayed Postdictive Completeness and Consistency in Learning, Dynamic Modeling in Inductive Inference, Numberings Optimal for Learning, Learning families of algebraic structures from informant, Iterative learning from positive data and negative counterexamples, Learning languages from positive data and a limited number of short counterexamples, Robust separations in inductive inference, Measure, category and learning theory, Learnability of automatic classes, Truth-tracking by belief revision, Explanatory and creative alternatives to the MDL principle, Learning languages from positive data and a finite number of queries, Automatic learning from positive data and negative counterexamples, Robust learning with infinite additional information, Costs of general purpose learning, On the learnability of recursively enumerable languages from good examples, Synthesizing noise-tolerant language learners, Vacillatory and BC learning on noisy data, Variations on U-shaped learning, Increasing the power of uniform inductive learners, The functions of finite support: a canonical learning problem, Equivalences between learning of data and probability distributions, and their applications, Learners based on transducers, Hypothesis Spaces for Learning, Learning correction grammars, Automatic Learners with Feedback Queries, Set-driven and rearrangement-independent learning of recursive languages, Random Subgroups of Rationals, Learning Finite Variants of Single Languages from Informant, Trade-off among parameters affecting inductive inference, Difficulties in Forcing Fairness of Polynomial Time Inductive 
Inference, Incremental Learning with Ordinal Bounded Example Memory, Learning from Streams, Priced Learning, On learning to coordinate: random bits help, insightful normal forms, and competency isomorphisms, MINIMAL CONCEPT IDENTIFICATION AND RELIABILITY, Logic and Learning, On the power of incremental learning., Learning classes of approximations to non-recursive functions., The synthesis of language learners., Incremental concept learning for bounded data mining., Robust behaviorally correct learning., The complexity of universal text-learners., Learning by the process of elimination, Inductive inference of approximations for recursive concepts, Relations between Gold-style learning and query learning, Maps of restrictions for behaviourally correct learning, On the interplay between inductive inference of recursive functions, complexity theory and recursive numberings, Learning languages with decidable hypotheses, Mapping monotonic restrictions in inductive inference, Normal forms for semantically witness-based learners in inductive inference, Towards a map for incremental learning in the limit from positive and negative information, Learning languages in the limit from positive information with finitely many memory changes