Hierarchies of probabilistic and team FIN-learning (Q5941370)

From MaRDI portal


scientific article; zbMATH DE number 1635558
English
Hierarchies of probabilistic and team FIN-learning

    Statements

    Hierarchies of probabilistic and team FIN-learning (English)
    20 August 2001
    A FIN-learning machine \(M\) receives successive values of a function \(f\) and at some moment outputs a conjecture, which should be a correct index of \(f\). FIN-learning has two extensions: (1) if \(M\) flips fair coins and learns a function with probability \(p\), we have FIN\(\langle p\rangle\)-learning; (2) when \(n\) machines simultaneously try to learn the same function \(f\) and at least \(k\) of these machines output correct indices of \(f\), we have learning by a \([k,n]\) FIN team. Sometimes a team or a probabilistic learner can simulate another one if their probabilities \(p_{1},p_{2}\) (or team success ratios \(k_{1}/n_{1},k_{2}/n_{2}\)) are close enough. On the other hand, there are cut-points \(r\) that make simulation of FIN\(\langle p_{2}\rangle\) by FIN\(\langle p_{1}\rangle\) impossible whenever \(p_{2}<r<p_{1}\). Cut-points above \(\frac{10}{21}\) are known. We show that the problem of deciding, for given \(k_{i},n_{i}\), whether \([k_{1},n_{1}]\) FIN \(\subseteq[k_{2},n_{2}]\) FIN is algorithmically solvable. The set of all FIN cut-points is shown to be well ordered and recursive. Asymmetric teams are introduced; they serve both as a tool for obtaining these results and as a notion of independent interest. The framework of asymmetric teams allows us to characterize the intersections \([k_{1},n_{1}]\) FIN \(\cap[k_{2},n_{2}]\) FIN, the unions \([k_{1},n_{1}]\) FIN \(\cup[k_{2},n_{2}]\) FIN, and the memberwise unions \([k_{1},n_{1}]\) FIN \(+[k_{2},n_{2}]\) FIN, i.e.\ the collections of all unions \(U_{1}\cup U_{2}\) where \(U_{i}\in[k_{i},n_{i}]\) FIN. Hence we can compare the learning power of traditional FIN-teams \([k,n]\) FIN as well as all kinds of their set-theoretic combinations.
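    The memberwise union introduced above can be restated in display form; this is only a transcription of the review's definition into standard set-builder notation, not additional material from the paper:

    \[
      [k_{1},n_{1}]\,\mathrm{FIN} + [k_{2},n_{2}]\,\mathrm{FIN}
      \;=\;
      \bigl\{\, U_{1}\cup U_{2} \;:\; U_{i}\in[k_{i},n_{i}]\,\mathrm{FIN},\ i=1,2 \,\bigr\}.
    \]

    Note that the memberwise union is in general larger than the plain union \([k_{1},n_{1}]\) FIN \(\cup[k_{2},n_{2}]\) FIN, since \(U_{1}\cup U_{2}\) need not belong to either class on its own.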
    team learning
    probabilistic learning
    inductive inference