An approximation theory approach to learning with \(\ell^1\) regularization (Q1944318)

Full work available at URL: https://doi.org/10.1016/j.jat.2012.12.004
OpenAlex ID: W1966805857

scientific article

Language: English
Label: An approximation theory approach to learning with \(\ell^1\) regularization
Description: scientific article

    Statements

    An approximation theory approach to learning with \(\ell^1\) regularization (English)
    Publication date: 5 April 2013
    Kernel-based regularization schemes with an \(\ell^1\)-regularizer and a general loss function are studied. Assuming that the input space \(X\) satisfies an interior cone condition, that the marginal distribution and the target function satisfy suitable regularity conditions, and that the kernel is appropriately smooth, an error analysis is carried out by means of a local polynomial reproduction formula from approximation theory. An error bound is proved, and improved learning rates that are independent of the dimension of \(X\) are derived.
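    The following Python sketch (not taken from the paper) illustrates the kind of scheme under study: a coefficient-based \(\ell^1\)-regularized kernel method whose hypothesis space is the data-dependent span of \(K(\cdot, x_i)\) over the sample. The Gaussian kernel, the least-squares loss, the regularization parameter lam, and the use of scikit-learn's Lasso are assumptions made for this example; the paper's analysis covers a general loss function.

import numpy as np
from sklearn.linear_model import Lasso

def gaussian_kernel(X, Z, sigma=0.5):
    # Gaussian kernel matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2)); illustrative choice of kernel.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))                 # sample drawn from the input space X
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)

K = gaussian_kernel(X, X)                                  # data-dependent design matrix: columns are K(., x_i)
lam = 1e-3                                                 # l^1 regularization parameter (assumed value)
model = Lasso(alpha=lam, fit_intercept=False, max_iter=50000).fit(K, y)

# Learned function f(x) = sum_i c_i K(x, x_i); the l^1 penalty makes the coefficient vector c sparse.
X_test = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
f_test = gaussian_kernel(X_test, X) @ model.coef_
print("nonzero coefficients:", np.count_nonzero(model.coef_), "of", model.coef_.size)
print("predictions:", np.round(f_test, 3))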
    learning theory
    data dependent hypothesis spaces
    kernel-based regularization scheme
    \(\ell^1\)-regularizer
    multivariate approximation

    Identifiers