Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer

From MaRDI portal
Publication: 543025


DOI: 10.11650/twjm/1500406018  zbMath: 1221.68204  MaRDI QID: Q543025

Quan-Wu Xiao, Ding-Xuan Zhou

Publication date: 21 June 2011

Published in: Taiwanese Journal of Mathematics

Full work available at URL: https://doi.org/10.11650/twjm/1500406018


62J02: General nonlinear regression

68T05: Learning and adaptive systems in artificial intelligence


Related Items

Distributed learning with partial coefficients regularization
Learning with Convex Loss and Indefinite Kernels
Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery
Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
Learning with coefficient-based regularization and \(\ell^1\)-penalty
Least squares regression with \(l_1\)-regularizer in sum space
Convergence rate of the semi-supervised greedy algorithm
Constructive analysis for coefficient regularization regression algorithms
Classification with polynomial kernels and \(l^1\)-coefficient regularization
Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
Least square regression with indefinite kernels and coefficient regularization
Unified approach to coefficient-based regularized regression
Learning theory approach to a system identification problem involving atomic norm
A simpler approach to coefficient regularized support vector machines regression
Indefinite kernel network with \(l^q\)-norm regularization
Constructive analysis for least squares regression with generalized \(K\)-norm regularization
Support vector machines regression with \(l^1\)-regularizer
Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
Learning rates for least square regressions with coefficient regularization
On the convergence rate of kernel-based sequential greedy regression
On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
Regularized ranking with convex losses and \(\ell^1\)-penalty
Kernel-based sparse regression with the correntropy-induced loss