Least square regression with indefinite kernels and coefficient regularization (Q617706)

From MaRDI portal

Property / full work available at URL: https://doi.org/10.1016/j.acha.2010.04.001
Property / OpenAlex ID: W2045386260
Property / cites work: Theory of Reproducing Kernels
Property / cites work: On regularization algorithms in learning theory
Property / cites work: 10.1162/153244302760200650
Property / cites work: On the mathematical foundations of learning
Property / cites work: Best choices for regularization parameters in learning theory: on the bias-variance problem.
Property / cites work: Regularization networks and support vector machines
Property / cites work: Q3093181
Property / cites work: Spectral Algorithms for Supervised Learning
Property / cites work: Support vector machine classification with indefinite kernels
Property / cites work: Q3093293
Property / cites work: Q4826695
Property / cites work: Shannon sampling and function reconstruction from point values
Property / cites work: Shannon sampling. II: Connections to learning theory
Property / cites work: Learning theory estimates via integral operators and their approximations
Property / cites work: Regularized least square regression with dependent samples
Property / cites work: Application of integral operator for regularized least-square regression
Property / cites work: A note on application of integral operator in learning theory
Property / cites work: Q4261789
Property / cites work: Learning rates of least-square regularized regression
Property / cites work: Multi-kernel regularized classifiers
Property / cites work: SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
Property / cites work: Learning with sample dependent hypothesis spaces
Property / cites work: Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
Property / cites work: Q3174152
Property / cites work: Q2880875
Property / cites work: Leave-One-Out Bounds for Kernel Methods
Property / cites work: Learning Bounds for Kernel Regression Using Effective Data Dimensionality


scientific article

Language: English
Label: Least square regression with indefinite kernels and coefficient regularization
Description: scientific article

    Statements

    Least square regression with indefinite kernels and coefficient regularization (English)
    Publication date: 13 January 2011
    Let \((y_i,x_i)_{i=1}^m\) be i.i.d. observations with \(y_i\in\mathbb R\) and \(x_i\in X\), where \(X\) is a compact metric space. The authors consider estimates \(f_z\) of the regression function \(f_\rho(x)=E(y_i\mid x_i=x)\), where \(f_z=f_{\alpha^z}\), \(f_\alpha(x)=\sum_{i=1}^m \alpha_i K(x,x_i)\), \[ \alpha^z=\arg\min_{\alpha\in\mathbb R^m} {1\over m}\,\sum_{i=1}^m (y_i - f_\alpha(x_i))^2+\lambda m\sum_{i=1}^m\alpha_i^2, \] \(K:X\times X\to\mathbb R\) is a continuous bounded kernel, not required to be symmetric or positive semidefinite, and \(\lambda>0\) is a regularization parameter. Consistency of \(f_z\) is demonstrated under the assumptions that \(\lambda=\lambda(m)\to 0\), \(\lambda^{3/2}\sqrt{m}\to\infty\), and the true regression function belongs to the closure of \(\{f_\alpha\}\) in a suitable reproducing kernel Hilbert space. To analyze the rates of convergence, the authors impose assumptions of the form \(E|(L^{-r}f_\rho)(x_i)|^2<\infty\) for some \(r>0\), where \((Lf)(x)=E\,\tilde K(x,x_i)f(x_i)\) and \(\tilde K(x,t)=E_u K(x,u)K(t,u)\). For example, if \(r>1\), then choosing \(\lambda=m^{-1/5}\) yields \(\|f_z-f_\rho\|_{L^2}=O(m^{-1/5})\). Results of simulations are presented for \(X=[0,1]\) and the Gaussian kernel \(K\).
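
    As a concrete illustration of the estimator above, here is a minimal NumPy sketch (not taken from the paper): setting the gradient of the displayed objective to zero gives the normal equations \((K^\top K+\lambda m^2 I)\alpha = K^\top y\), which the sketch solves directly on \(X=[0,1]\) with a Gaussian kernel and the parameter choice \(\lambda=m^{-1/5}\). The target function, kernel width, noise level, and sample size are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(s, t, sigma=0.1):
    """Gaussian kernel K(s, t) = exp(-(s - t)^2 / (2 * sigma^2)) on X = [0, 1]."""
    return np.exp(-((s - t) ** 2) / (2.0 * sigma ** 2))

def fit_coefficients(x, y, lam, kernel=gaussian_kernel):
    """Coefficient-regularized least squares:
    minimize (1/m) * sum_i (y_i - f_alpha(x_i))^2 + lam * m * sum_i alpha_i^2.
    The gradient vanishes at (K^T K + lam * m^2 * I) alpha = K^T y;
    K need not be symmetric or positive definite, so we solve this system directly."""
    m = len(x)
    K = kernel(x[:, None], x[None, :])              # m x m kernel matrix K_ij = K(x_i, x_j)
    A = K.T @ K + lam * m ** 2 * np.eye(m)
    return np.linalg.solve(A, K.T @ y)

def predict(x_new, x_train, alpha, kernel=gaussian_kernel):
    """Evaluate f_alpha(x) = sum_i alpha_i * K(x, x_i)."""
    return kernel(x_new[:, None], x_train[None, :]) @ alpha

# Toy experiment on X = [0, 1]; the target f_rho and the noise level are
# hypothetical, chosen only to exercise the estimator.
rng = np.random.default_rng(0)
m = 200
x = rng.uniform(0.0, 1.0, m)
f_rho = lambda t: np.sin(2.0 * np.pi * t)           # hypothetical regression function
y = f_rho(x) + 0.1 * rng.standard_normal(m)

lam = m ** (-1.0 / 5.0)                             # lambda = m^{-1/5}, as in the rate discussion
alpha = fit_coefficients(x, y, lam)

x_test = np.linspace(0.0, 1.0, 1000)
rmse = np.sqrt(np.mean((predict(x_test, x, alpha) - f_rho(x_test)) ** 2))
print(f"empirical L2 error with lambda = m^(-1/5): {rmse:.4f}")
```

    Because the penalty acts on the coefficient vector rather than on an RKHS norm, the matrix \(K^\top K+\lambda m^2 I\) is positive definite for any \(\lambda>0\), so the problem remains strictly convex even when \(K\) is indefinite; solving the \(m\times m\) system costs \(O(m^3)\), and an iterative solver would be preferable for large samples.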
    Keywords: Mercer kernel; integral operator; learning rates; numerical examples; indefinite kernel; coefficient regularization; least square regression; capacity independent error bounds; regression function; reproducing kernel Hilbert space; convergence; Gaussian kernel

    Identifiers