Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces


Publication:550498


DOI: 10.1016/j.acha.2011.01.001

zbMath: 1221.68201

MaRDI QID: Q550498

Yong-Cai Geng, Sumit K. Garg

Publication date: 11 July 2011

Published in: Applied and Computational Harmonic Analysis

Full work available at URL: https://doi.org/10.1016/j.acha.2011.01.001


62J02: General nonlinear regression

68T05: Learning and adaptive systems in artificial intelligence
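
For orientation, the learning scheme named in the title is usually formulated as below; this display is a sketch under assumed notation (sample \(\mathbf{z}=\{(x_i,y_i)\}_{i=1}^m\), kernel \(K\), regularization parameter \(\lambda>0\)), not a formula taken from this record:

\[
f_{\mathbf{z},\lambda}=\sum_{i=1}^{m}\alpha_i^{\mathbf{z}}K(\cdot,x_i),
\qquad
\alpha^{\mathbf{z}}\in\operatorname*{arg\,min}_{\alpha\in\mathbb{R}^m}
\Biggl\{\frac{1}{m}\sum_{j=1}^{m}\Bigl(y_j-\sum_{i=1}^{m}\alpha_i K(x_j,x_i)\Bigr)^{2}
+\lambda\sum_{i=1}^{m}\lvert\alpha_i\rvert\Biggr\}.
\]

Here the hypothesis space \(\operatorname{span}\{K(\cdot,x_i)\}_{i=1}^{m}\) is built from the sample itself (hence "data dependent"), and the \(\ell^1\) penalty acts on the coefficient vector \(\alpha\).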


Related Items

Gradient descent for robust kernel-based regression, Distributed learning with partial coefficients regularization, Learning Theory of Randomized Sparse Kaczmarz Method, Nyström subsampling method for coefficient-based regularized regression, Sparse additive machine with ramp loss, Multikernel Regression with Sparsity Constraint, Parameter choices for sparse regularization with the ℓ1 norm *, Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications, Coefficient-based regularization network with variance loss for error, Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel, Regularized modal regression with data-dependent hypothesis spaces, Distributed learning with indefinite kernels, Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection, Learning rates for regularized least squares ranking algorithm, Learning with Convex Loss and Indefinite Kernels, Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery, Generalization Analysis of Fredholm Kernel Regularized Classifiers, Learning Rates for Classification with Gaussian Kernels, Optimality of the rescaled pure greedy learning algorithms, Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping, Learning sparse and smooth functions by deep sigmoid nets, Statistical consistency of coefficient-based conditional quantile regression, Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels, Learning with coefficient-based regularization and \(\ell^1\)-penalty, Least squares regression with \(l_1\)-regularizer in sum space, Quantile regression with \(\ell_1\)-regularization and Gaussian kernels, Convergence rate of the semi-supervised greedy algorithm, Constructive analysis for coefficient regularization regression algorithms, Error analysis for \(l^q\)-coefficient regularized moving least-square regression, Learning theory approach to a system identification problem involving atomic norm, Nonparametric regression using needlet kernels for spherical data, Distributed regression learning with coefficient regularization, Error analysis for coefficient-based regularized regression in additive models, A simpler approach to coefficient regularized support vector machines regression, Indefinite kernel network with \(l^q\)-norm regularization, Constructive analysis for least squares regression with generalized \(K\)-norm regularization, Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling, On the convergence rate of kernel-based sequential greedy regression, Multi-task learning in vector-valued reproducing kernel Banach spaces with the \(\ell^1\) norm, Online pairwise learning algorithms with convex loss functions, Learning with correntropy-induced losses for regression with mixture of symmetric stable noise, Optimal rates for coefficient-based regularized regression, Approximation on variable exponent spaces by linear integral operators, Kernel-based sparse regression with the correntropy-induced loss, Distributed learning with multi-penalty regularization, On grouping effect of elastic net, Distributed semi-supervised regression learning with coefficient regularization, Modal additive models with data-driven structure identification, On reproducing kernel Banach spaces: generic definitions and unified framework of constructions, Half supervised coefficient regularization for regression learning with unbounded sampling, Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression, Learning by atomic norm regularization with polynomial kernels



Cites Work