Learnability with respect to fixed distributions (Q809614)

scientific article

    Statements

    Learnability with respect to fixed distributions (English)
    1991
    Let \(D\) be a distribution over a set \(X\) (\(X\) finite, countable, or \(\subseteq E^r\)) and let \(C\subseteq 2^X\) be a concept class. For \(x\in X^l\), \(l\geq 1\), and a concept \(c\in C\), the \(l\)-sample of \(c\) is \(\mathrm{sam}_c(x)=(\langle x_1,I_c(x_1)\rangle,\dots,\langle x_l,I_c(x_l)\rangle)\), where \(I_c(x_j)=1\) if \(x_j\in c\) and \(I_c(x_j)=0\) otherwise. The sample space \(S_C\) is the set of all \(l\)-samples of \(c\), over all \(c\in C\) and \(l\geq 1\). For an algebra \(H\) of Borel sets over \(X\), let \(F_{C,H}\) be the set of all functions \(f:S_C\to H\). The concept class \(C\) is learnable with respect to the distribution \(D\) (over \(X\)) in terms of \(H\) if there exists a function \(f\in F_{C,H}\) such that for all \(\epsilon,\delta>0\) there exists \(l>0\) such that, for every \(c\in C\) and every \(x\in X^l\) selected according to \(D\), \(f(\mathrm{sam}_c(x))\) and \(c\) are \(\epsilon\)-close with respect to \(D\) with probability at least \(1-\delta\). (Sets \(Y_1,Y_2\) are \(\epsilon\)-close with respect to \(D\) if \(\Pr_D(Y_1\oplus Y_2)<\epsilon\), where \(\oplus\) denotes the symmetric difference.) A set \(H_\epsilon\subseteq 2^X\) is an \(\epsilon\)-cover of \(C\) with respect to \(D\) if for every \(c\in C\) there is an \(h\in H_\epsilon\) which is \(\epsilon\)-close to \(c\). \(C\) is finitely coverable with respect to \(D\) if for every \(\epsilon>0\) there is a finite \(\epsilon\)-cover of \(C\). It is proved that \(C\) is learnable with respect to \(D\) iff \(C\) is finitely coverable with respect to \(D\). A connection is established between the size of the cover and the Vapnik-Chervonenkis dimension of the concept class. Learning with errors (i.e., where each label in a sample is independently correct with probability \(\zeta>0.5\)) is also considered.
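    The cover-based learner behind this equivalence can be made concrete for a finite domain: build a finite \(\epsilon\)-cover of \(C\) under \(D\), then output the cover element that disagrees least with the labelled sample. The following minimal Python sketch illustrates this under those assumptions; the function names (d_prob, eps_cover, cover_learner) and the greedy cover construction are illustrative choices, not taken from the article.

import random

def d_prob(s, D):
    # Probability of a subset s of X under the distribution D (a dict x -> Pr[x]).
    return sum(p for x, p in D.items() if x in s)

def sym_diff_prob(a, b, D):
    # Pr_D(a XOR b): the distance under which eps-closeness is measured.
    return d_prob(a ^ b, D)

def eps_cover(concepts, D, eps):
    # Greedy finite eps-cover of the concept class w.r.t. D: keep a concept
    # only if no previously chosen cover element is already eps-close to it.
    cover = []
    for c in concepts:
        if all(sym_diff_prob(c, h, D) >= eps for h in cover):
            cover.append(c)
    return cover

def cover_learner(sample, cover):
    # Output the cover element with the fewest disagreements on the
    # labelled sample [(x, label), ...].
    return min(cover, key=lambda h: sum((x in h) != y for x, y in sample))

# Illustrative usage: X = {0,...,5}, uniform D, concepts = all initial segments.
X = list(range(6))
D = {x: 1 / 6 for x in X}
concepts = [frozenset(range(k)) for k in range(7)]
target = frozenset(range(4))
cover = eps_cover(concepts, D, eps=0.2)
# random.choices draws uniformly by default, matching the uniform D here.
sample = [(x, x in target) for x in random.choices(X, k=40)]
h = cover_learner(sample, cover)
print("hypothesis error:", sym_diff_prob(h, target, D))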
    machine-learning