Learning with ordinal-bounded memory from positive data (Q440010)
From MaRDI portal
Full work available at URL: https://doi.org/10.1016/j.jcss.2012.03.002
OpenAlex ID: W2080981465
scientific article
Statements
Learning with ordinal-bounded memory from positive data (English)
17 August 2012
The paper generalises the paradigm of `learning in the limit with bounded example memory'. Learning in the limit models the following scenario: a learner (a computable device) receives, one by one, the members of a recursively enumerable (r.e.) set \(L\) of natural numbers (in any order, possibly with repetitions, each member of \(L\) being presented at least once); each time a new number appears, the learner outputs a description of an r.e.\ set, and it must eventually keep outputting a correct, fixed description of \(L\). In the paradigm of bounded example memory, the learner can base its conjecture only on the previous conjecture, the new datum \(x\) it receives, and a set \(D\) of previously received and memorised data, while updating its memory to some subset of \(D\cup\{x\}\) (so the new datum may or may not be memorised, and any memorised datum may be kept or discarded). The size of the memory set can be bounded by an integer \(k\), and the paradigm becomes more and more stringent as \(k\) decreases; it is equivalent to the paradigm of `iterative learning' when \(k=0\) (every conjecture is then based only on the last conjecture and the new datum). In the proposed generalisation, the learner must also output representations of smaller and smaller ordinals every time the memory becomes larger than it ever was (since data can be forgotten, the memory can shrink, so an ordinal representation is output not whenever the memory grows, but only when it grows beyond the largest size it ever reached). The initial ordinal representation is fixed to some element \(a\), giving rise to the paradigm of `\(a\)-bounded example memory' learning.
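The memory discipline described above can be illustrated by a minimal sketch. Everything here is a toy model built for this review, not the paper's construction: the class of languages \(\{\{0,\dots,n\} : n\in\mathbb N\}\), the learner `two_max_learner`, and the checker `k_bounded_step` are all hypothetical names; the only constraints taken from the text are that the new memory is a subset of \(D\cup\{x\}\) of size at most \(k\).

```python
def k_bounded_step(learner, k, prev, memory, x):
    """Run one step of a learner and enforce the bounded-example-memory
    constraints: the new memory must be a subset of memory | {x} of size
    at most k. (Hypothetical checker, written for illustration.)"""
    conj, new_mem = learner(prev, memory, x)
    assert new_mem <= memory | {x} and len(new_mem) <= k
    return conj, new_mem

def two_max_learner(prev, memory, x):
    """Toy learner with k = 2 for the class {{0,...,n} : n in N}:
    it memorises the two largest data seen and conjectures their maximum
    (the conjecture n stands for the language {0,...,n})."""
    mem = set(sorted(memory | {x})[-2:])
    return max(mem), mem

conj, mem = None, set()
for x in [2, 0, 4, 1, 4, 3]:  # a presentation of {0, 1, 2, 3, 4}
    conj, mem = k_bounded_step(two_max_learner, 2, conj, mem, x)
print(conj)  # 4 — a correct description of {0,...,4} in this toy coding
```

With \(k=0\) (iterative learning) the same class is still learnable, since the maximum seen so far can be read off the previous conjecture; the memory set only becomes essential for richer classes, which is what the paper's separation results quantify.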
The ordinal representations are drawn from Kleene's system of notation \(\mathcal O\) for the constructive ordinals (a notation is the index of a program which describes -- not in a unique way -- an ordinal, either from a notation of its predecessor or, in the case of a limit ordinal \(\alpha\), from a sequence of notations of ordinals whose limit is \(\alpha\)). For a notation \(\underline{k}\) of a natural number \(k\), \(\underline{k}\)-bounded example memory learning is equivalent to bounded example memory learning with a memory bound of \(k\). The first main result of the paper is that for any ordinal notation \(a\), \(a\)-bounded example memory learning is weaker than learning in the limit with no bound on the set of memorised data (which is equivalent to remembering all data presented, though not their order of presentation). The second main result is that given two ordinal notations \(a\) and \(b\) with \(a<_{\mathcal O}b\), \(a\)-bounded example memory learning is weaker than \(b\)-bounded example memory learning (\(<_{\mathcal O}\) is an r.e. relation on ordinal notations consistent with the order on the ordinals themselves: if \(a<_{\mathcal O}b\) then the ordinal denoted by \(a\) is smaller than the ordinal denoted by \(b\)). The authors then show that if \(a\) and \(b\) are two notations for the same ordinal smaller than \(\omega^2\), then \(a\)-bounded example memory learning and \(b\)-bounded example memory learning are equivalent, reflecting the fact that notations below \(\omega^2\) are essentially unique. In contrast, from \(\omega^2\) on the concepts are notation-dependent: for every notation \(a\), there exists a class of languages which cannot be learned by any learner using an \(a\)-bounded example memory, but which can be learned using a \(b\)-bounded example memory, where \(b\) is a notation for \(\omega^2\).
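The special role of \(\omega^2\) can be made concrete with a small sketch. Every ordinal below \(\omega^2\) has a unique normal form \(\omega\cdot m + n\), so it can be coded as the pair \((m, n)\) under lexicographic order; this pair coding is an illustrative assumption for this review, not the paper's coding via Kleene's \(\mathcal O\). Because the lexicographic order on such pairs is well-founded, any strictly decreasing sequence of them is finite, which is what lets the ordinal counter bound the number of memory-growth events.

```python
def below_omega2(m, n):
    """Code the ordinal w*m + n (with m, n natural numbers) as the pair
    (m, n); Python's tuple comparison is lexicographic, matching the
    ordinal order. (Hypothetical coding, for illustration only.)"""
    return (m, n)

# A strictly decreasing sequence starting at w*2 + 1. Note that the
# finite part is dominated by any drop in the w-coefficient: going from
# w*2 to w + 57 is a decrease even though 57 > 0.
seq = [
    below_omega2(2, 1),      # w*2 + 1
    below_omega2(2, 0),      # w*2
    below_omega2(1, 57),     # w + 57
    below_omega2(0, 10**6),  # 1000000
]
assert all(later < earlier for earlier, later in zip(seq, seq[1:]))
```

The uniqueness of this normal form is the informal content of the equivalence result below \(\omega^2\): any two reasonable codings of the same ordinal \(\omega\cdot m + n\) carry the same information, whereas from \(\omega^2\) on a notation must additionally fix a fundamental sequence, and different choices are no longer interchangeable.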
The paper ends with a result on cumulative learners, which can remove a memorised datum only by letting the new datum replace it (at most one datum is discarded per step); hence the memory can stay the same size or grow by one element, but it can never shrink. Using techniques from Kolmogorov complexity, the authors show that this is a proper restriction of the paradigm of bounded example memory learning already for a memory bound as small as 2.
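The cumulative memory discipline can be sketched as a single update rule. The function name and interface below are hypothetical, invented for this review; the constraint they encode is the one stated above: the learner either adds the new datum or lets it replace exactly one memorised datum, so the memory never shrinks.

```python
def cumulative_update(memory, x, replaced=None):
    """One memory step of a cumulative learner: add the new datum x,
    or let x replace exactly one memorised datum. The memory size
    never decreases. (Hypothetical interface, for illustration.)"""
    if replaced is None:
        return memory | {x}          # memory grows (or stays the same if x is known)
    assert replaced in memory        # only a currently memorised datum can be replaced
    return (memory - {replaced}) | {x}

m1 = {1, 5}
m2 = cumulative_update(m1, 9)               # {1, 5, 9}: grows by one
m3 = cumulative_update(m2, 4, replaced=1)   # {4, 5, 9}: same size
assert len(m2) == len(m1) + 1 and len(m3) == len(m2)
```

Contrast this with the general bounded-example-memory rule, where the new memory may be any subset of \(D\cup\{x\}\) and can therefore discard several data at once; the paper shows this extra freedom already matters at memory bound 2.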
inductive inference
bounded example memory
constructive ordinals
Kolmogorov complexity