Learning with coefficient-based regularization and \(\ell^1\)-penalty
From MaRDI portal
Publication: 380980
DOI: 10.1007/s10444-012-9288-6 · zbMath: 1296.68128 · OpenAlex: W2058624205 · MaRDI QID: Q380980
Publication date: 15 November 2013
Published in: Advances in Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10444-012-9288-6
Keywords: learning theory; coefficient-based regularization and \(\ell^1\)-penalty; concentration estimate for error analysis; unbounded sampling processes
MSC classification: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
Related Items (10)
Statistical consistency of coefficient-based conditional quantile regression ⋮ Unnamed Item ⋮ Gradient descent for robust kernel-based regression ⋮ Sparse regularized learning in the reproducing kernel Banach spaces with the \(\ell^1\) norm ⋮ Learning rates for regularized least squares ranking algorithm ⋮ Multi-task learning in vector-valued reproducing kernel Banach spaces with the \(\ell^1\) norm ⋮ Nyström subsampling method for coefficient-based regularized regression ⋮ Moving quantile regression ⋮ Distributed learning with indefinite kernels ⋮ Optimal rates for coefficient-based regularized regression
Cites Work
- Learning by nonsymmetric kernels with data dependent spaces and \(\ell^1\)-regularizer
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Model selection for regularized least-squares algorithm in learning theory
- Regularization in kernel learning
- Reproducing kernel Banach spaces with the \(\ell^1\) norm
- Weak convergence and empirical processes. With applications to statistics
- Concentration estimates for learning with unbounded sampling
- Learning with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression
- Multiscale kernels
- Learning theory estimates via integral operators and their approximations
- Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression
- Probability Inequalities for the Sum of Independent Random Variables
- Leave-One-Out Bounds for Kernel Methods
- Sequential Bayesian Decoding with a Population of Neurons
- Sparsity and incoherence in compressive sampling
- Learning Theory
- Theory of Reproducing Kernels