An approximation theory approach to learning with \(\ell^1\) regularization
From MaRDI portal
Publication:1944318
DOI: 10.1016/j.jat.2012.12.004  zbMath: 1283.68308  MaRDI QID: Q1944318
Hong-Yan Wang, Quan-Wu Xiao, Ding-Xuan Zhou
Publication date: 5 April 2013
Published in: Journal of Approximation Theory
Full work available at URL: https://doi.org/10.1016/j.jat.2012.12.004
learning theory; multivariate approximation; \(\ell^1\)-regularizer; data dependent hypothesis spaces; kernel-based regularization scheme
68Q32: Computational learning theory
68T05: Learning and adaptive systems in artificial intelligence
41A63: Multidimensional problems
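The keywords describe a kernel-based regularization scheme with an \(\ell^1\) penalty on expansion coefficients over a data-dependent hypothesis space, i.e. functions of the form \(f = \sum_i c_i K(x_i, \cdot)\) built from the sample points themselves. The following is a minimal illustrative sketch of that general idea, not the paper's exact scheme: the Gaussian kernel, the ISTA solver, and all parameter values (`lam`, `sigma`, `steps`) are assumptions chosen for the example.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel: K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def l1_kernel_regression(X, y, lam=0.1, sigma=1.0, steps=1000):
    """Illustrative coefficient-based l^1 regularization (not the paper's scheme):
    minimize (1/m) ||K c - y||^2 + lam ||c||_1 over coefficients c of the
    data-dependent hypothesis space span{K(x_i, .)}, solved by ISTA."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    c = np.zeros(m)
    # Step size 1/L, where L is the Lipschitz constant of the smooth part's gradient
    L = 2.0 * np.linalg.norm(K, 2) ** 2 / m
    for _ in range(steps):
        grad = 2.0 * K.T @ (K @ c - y) / m
        z = c - grad / L
        # Soft-thresholding: the proximal step induced by the l^1 penalty
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return c
```

The \(\ell^1\) penalty drives many coefficients \(c_i\) exactly to zero, so the learned function uses only a sparse subset of the kernel sections \(K(x_i,\cdot)\); this sparsity is what distinguishes the scheme from classical \(\ell^2\) (RKHS-norm) regularization.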
Related Items
Multikernel Regression with Sparsity Constraint
Learning with Convex Loss and Indefinite Kernels
Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
Learning theory approach to a system identification problem involving atomic norm
A simpler approach to coefficient regularized support vector machines regression
Stability analysis of learning algorithms for ontology similarity computation
On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
Learning by atomic norm regularization with polynomial kernels