Local entropy in learning theory (Q2460502)
Property / full work available at URL: https://doi.org/10.1007/s11006-006-0213-5
Property / OpenAlex ID: W2052032680
Property / cites work: Approximation methods for supervised learning
Property / cites work: Information-theoretic determination of minimax rates of convergence


Language: English
Label: Local entropy in learning theory
Description: scientific article

    Statements

    Local entropy in learning theory (English)
    12 November 2007
    Let \((S, \tau)\) be a metric space, let \(A \subseteq S\), and let \(c > 1\) be a fixed constant. For \(\varepsilon > 0\), the local packing number is the quantity \[ {\overline P}_\varepsilon(c, A, S) := \sup \{ n : \exists\, x_1, \ldots, x_n \in A, \quad \varepsilon \leq \tau(x_i, x_j) \leq c\,\varepsilon \text{ for all } i \neq j \}. \] Further, let \(X = {\mathbb R}^m\), \(Y = [-M, M]\), \(Z = X \times Y\), and suppose that \(\rho\) is a probability measure on \(Z\). The purpose of the paper is to approximate the regression function \(f_\rho(x) = \int_Y y\, d\rho(y\mid x)\), where \(\rho(y\mid x)\) is the conditional probability measure. It is assumed that a Borel probability measure \(\mu\) on \(X\) is fixed and that a set \(\Theta\) of admissible Borel functions is given. Under these conditions, the existence of an estimator \(f_z\), constructed from a sample \(z \in Z^m\) drawn according to the product measure \(\rho^m\), is proved for which an upper bound on the probability \(\rho^m\{z : \| f_z - f_\rho\|_{L_2(\mu)} > \eta \}\) holds for all \(\eta > 0\). The bound is expressed in terms of the local packing number \({\overline P}_\varepsilon(20, \Theta, L_2(\mu))\).
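    As an illustration of the definition above (not part of the paper under review), the following minimal Python sketch computes a lower bound on the local packing number for a finite point set in Euclidean space. The function name local_packing_lower_bound and the greedy selection strategy are assumptions of this sketch; a single greedy pass yields only a lower bound on the supremum, not its exact value.

```python
import numpy as np

def local_packing_lower_bound(points, eps, c):
    """Greedy lower bound on the local packing number
    P_eps(c, A, S) = sup{ n : exist x_1, ..., x_n in A with
    eps <= tau(x_i, x_j) <= c * eps for all i != j },
    with tau taken here to be the Euclidean metric and A = points.
    A single greedy pass gives a valid but possibly loose lower bound.
    """
    chosen = []
    for x in points:
        # Keep x only if it is eps-separated from, and within c*eps of,
        # every point selected so far (two-sided constraint).
        if all(eps <= np.linalg.norm(x - y) <= c * eps for y in chosen):
            chosen.append(x)
    return len(chosen)

# Example: 30 points spaced eps apart on a line; with c = 20 the greedy
# pass keeps points 0, 1, ..., 20, whose pairwise distances lie in
# [eps, 20 * eps].
eps = 1.0
pts = [np.array([float(k)]) for k in range(30)]
print(local_packing_lower_bound(pts, eps, c=20))  # 21
```

    Computing the exact supremum would require a combinatorial search over subsets; the greedy pass suffices to illustrate the two-sided separation constraint \(\varepsilon \leq \tau(x_i, x_j) \leq c\,\varepsilon\) that distinguishes local packing from ordinary packing.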
    entropy
    learning
    accuracy confidence