Learning with ordinal-bounded memory from positive data (Q440010)
From MaRDI portal
Property / author: Sanjay Jain / rank
Normal rank
Property / reviewed by: Eric Martin / rank
Normal rank
Property / Mathematics Subject Classification ID: 68Q32 / rank
Normal rank
Property / Mathematics Subject Classification ID: 68Q30 / rank
Normal rank
Property / zbMATH DE Number: 6067749 / rank
Normal rank
Property / zbMATH Keywords: inductive inference / rank
Normal rank
Property / zbMATH Keywords: bounded example memory / rank
Normal rank
Property / zbMATH Keywords: constructive ordinals / rank
Normal rank
Property / zbMATH Keywords: Kolmogorov complexity / rank
Normal rank
Property / MaRDI profile type: MaRDI publication profile / rank
Normal rank
Property / full work available at URL: https://doi.org/10.1016/j.jcss.2012.03.002 / rank
Normal rank
Property / OpenAlex ID: W2080981465 / rank
Normal rank
Property / cites work: Ordinal mind change complexity of language identification / rank
Normal rank
Property / cites work: Toward a mathematical theory of inductive inference / rank
Normal rank
Property / cites work: Incremental Learning with Ordinal Bounded Example Memory / rank
Normal rank
Property / cites work: Incremental concept learning for bounded data mining. / rank
Normal rank
Property / cites work: Feasible Iteration of Feasible Learning Functionals / rank
Normal rank
Property / cites work: On the role of procrastination in machine learning / rank
Normal rank
Property / cites work: Language identification in the limit / rank
Normal rank
Property / cites work: Language learning from texts: Mindchanges, limited memory and monotonicity / rank
Normal rank
Property / cites work: On the Forms of the Predicates in the Theory of Constructive Ordinals / rank
Normal rank
Property / cites work: On the Forms of the Predicates in the Theory of Constructive Ordinals (Second Paper) / rank
Normal rank
Property / cites work: Learning with Temporary Memory / rank
Normal rank
Property / cites work: Incremental learning from positive data / rank
Normal rank
Property / cites work: Q4337021 / rank
Normal rank
Property / cites work: Q5573961 / rank
Normal rank
Property / cites work: Q4115138 / rank
Normal rank
Latest revision as of 13:22, 5 July 2024
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Learning with ordinal-bounded memory from positive data | scientific article | |
Statements
Learning with ordinal-bounded memory from positive data (English)
0 references
17 August 2012
0 references
The paper generalises the paradigm of `learning in the limit with bounded example memory'. Learning in the limit models the scenario of a learner (a computable device) receiving one by one the members of a recursively enumerable (r.e.) set \(L\) of natural numbers (in any order, possibly with repetitions, each member of \(L\) being presented at least once), outputting a description of an r.e.\ set every time a new number appears, and having to eventually keep outputting a correct, fixed description of \(L\). In the paradigm of bounded example memory, the learner can base its conjecture only on the previous conjecture, the new datum \(x\) it receives, and a set \(D\) of data which have been previously received and memorised, while updating its memory to some subset of \(D\cup\{x\}\) (so the new datum may or may not be memorised, and any memorised datum may or may not remain memorised). The size of the memory set can be bounded by an integer \(k\), and the paradigm becomes more and more stringent as \(k\) decreases; it is equivalent to the paradigm of `iterative learning' when \(k=0\) (every conjecture is then based only on the last conjecture and the new datum). In the proposed generalisation, the learner must also output representations of smaller and smaller ordinals every time the memory becomes larger than it ever was (as data can be forgotten, memory can shrink, so an ordinal representation is output not whenever memory increases, but only when it increases beyond the largest size it has ever reached). The initial ordinal representation is fixed to some element \(a\), giving rise to the paradigm of `\(a\)-bounded example memory' learning. 
The ordinal representations are determined by Kleene's system of notation \(\mathcal O\) of all constructive ordinals (a notation is the index of a program which allows one to describe -- not in a unique way -- an ordinal from a notation of its predecessor or, in the case of a limit ordinal \(\alpha\), from a sequence of notations of ordinals whose limit is \(\alpha\)). For a representation \(\underline{k}\) of a natural number \(k\), \(\underline{k}\)-bounded example memory learning is equivalent to bounded example memory learning with a memory bound of \(k\). The first main result of the paper is that for any ordinal notation \(a\), \(a\)-bounded example memory learning is weaker than learning in the limit with no bound on the set of memorised data (which is equivalent to remembering all data presented, but not the order of presentation). The second main result is that given two ordinal notations \(a\) and \(b\) with \(a<_{\mathcal O}b\), \(a\)-bounded example memory learning is weaker than \(b\)-bounded example memory learning (\(<_{\mathcal O}\) is an r.e. relation on ordinal notations that is consistent with the order relation on ordinals, that is, if \(a<_{\mathcal O}b\) then the ordinal represented by \(a\) is smaller than the ordinal represented by \(b\)). The authors then show that if \(a\) and \(b\) are two notations for a given ordinal smaller than \(\omega^2\), then \(a\)-bounded example memory learning and \(b\)-bounded example memory learning are equivalent, reflecting the fact that notations below \(\omega^2\) are essentially unique. In contrast, from \(\omega^2\) onwards the concepts are notation-dependent: it is shown that for all notations \(a\), there exists a class of languages which cannot be learned by any learner using an \(a\)-bounded example memory, but which can be learned using a \(b\)-bounded example memory where \(b\) is a notation for \(\omega^2\). 
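The uniqueness of notations below \(\omega^2\) can be made concrete in a small sketch (purely illustrative, not from the paper): every ordinal below \(\omega^2\) has a unique normal form \(\omega\cdot q+r\) with natural numbers \(q\) and \(r\), so a notation may be taken to be the pair \((q,r)\), and the ordinal order is then the lexicographic order on pairs.

```python
# Purely illustrative, not the paper's formalism: ordinals below omega^2
# encoded as pairs (q, r) standing for omega*q + r; ordinal order on these
# ordinals coincides with lexicographic order on the pairs.

def is_descent(seq):
    """Check that a sequence of (q, r) pairs is strictly ordinal-decreasing."""
    return all(a > b for a, b in zip(seq, seq[1:]))  # tuple order = lex order

# Starting from omega (the pair (1, 0)) a learner may procrastinate: it first
# commits to some finite r, then must count down through (0, r), (0, r-1), ...
assert is_descent([(1, 0), (0, 5), (0, 2), (0, 0)])
assert not is_descent([(0, 2), (0, 3)])  # the sequence may never go back up
```

Since lexicographic order on pairs of naturals is well-founded, any such descending sequence is finite, which is what bounds the number of times the memory may grow beyond its previous maximum.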
The paper ends with a result on cumulative learners, which can only remove a single memorised datum per step, by letting the new datum replace it; hence memory can remain of the same size or increase by one, but it can never decrease. Using techniques from Kolmogorov complexity, this is shown to be a proper restriction of the paradigm of bounded example memory learning, already for a memory bound as small as 2.
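The bounded-example-memory update discipline described above can be sketched as follows (a toy sketch under assumed names; `run_bem_learner` and `step` are illustrative, not the authors' formalism):

```python
# A toy driver enforcing the two bounded-example-memory constraints: the new
# memory is a subset of (old memory plus the new datum) of size at most k.

def run_bem_learner(step, text, memory_bound):
    """Feed a finite prefix of a text to a learner; return its last conjecture.

    step(conjecture, memory, x) returns (new_conjecture, new_memory).
    """
    conjecture, memory = None, frozenset()
    for x in text:
        conjecture, new_memory = step(conjecture, memory, x)
        assert new_memory <= memory | {x}       # may only keep old data or x
        assert len(new_memory) <= memory_bound  # the size bound k
        memory = frozenset(new_memory)
    return conjecture

# An iterative learner (memory bound k = 0) for the class of all sets
# {0, ..., n}: conjecture the largest datum seen so far, remember nothing.
def step(conjecture, memory, x):
    return (x if conjecture is None else max(conjecture, x), frozenset())
```

On the text \(2, 0, 1, 2, 1\) this learner converges to the conjecture 2 (read as a description of \(\{0,1,2\}\)); with \(k=0\) every conjecture depends only on the previous conjecture and the new datum, matching the iterative-learning case described above.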
0 references
inductive inference
0 references
bounded example memory
0 references
constructive ordinals
0 references
Kolmogorov complexity
0 references