Half supervised coefficient regularization for regression learning with unbounded sampling
From MaRDI portal
Publication:2855757
DOI: 10.1080/00207160.2012.749985 · zbMath: 1333.68228 · OpenAlex: W2090761569 · MaRDI QID: Q2855757
Publication date: 22 October 2013
Published in: International Journal of Computer Mathematics
Full work available at URL: https://doi.org/10.1080/00207160.2012.749985
Keywords: learning theory; least squares regression; semi-supervised learning; coefficient-based regularization; unbounded sampling
MSC classification: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
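The coefficient-based regularization named in the keywords can be illustrated with a generic sketch (not taken from the paper itself): given samples (x_i, y_i), one fits f(x) = Σ_j α_j K(x, x_j) by minimizing the empirical squared error plus a penalty on the coefficient vector α directly, rather than on the RKHS norm of f. The kernel choice, regularization parameter, and helper names below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Pairwise Gaussian kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def coefficient_regularized_ls(X, y, lam=1e-2, sigma=1.0):
    # Minimize (1/m) * ||K @ alpha - y||^2 + lam * ||alpha||^2 over alpha.
    # Setting the gradient to zero gives (K^T K + m * lam * I) alpha = K^T y.
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K.T @ K + m * lam * np.eye(m), K.T @ y)
    return alpha

# Usage: regress noisy samples of sin(pi * x) on [-1, 1].
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 1))
y = np.sin(np.pi * X[:, 0]) + 0.05 * rng.standard_normal(50)
alpha = coefficient_regularized_ls(X, y)
preds = gaussian_kernel(X, X) @ alpha
```

Note the contrast with classical kernel ridge regression, which penalizes αᵀKα (the RKHS norm); penalizing ‖α‖² instead places the hypothesis in a sample-dependent space and allows indefinite or non-symmetric kernels, which is one motivation in this literature.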
Related Items (4)
- Error analysis of the moving least-squares method with non-identical sampling
- Convergence rate for the moving least-squares learning with dependent sampling
- Regularized least square regression with unbounded and dependent sampling
- Convergence rate of SVM for kernel-based robust regression
Cites Work
- Integral operator approach to learning theory with unbounded sampling
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Least square regression with indefinite kernels and coefficient regularization
- On regularization algorithms in learning theory
- Elastic-net regularization in learning theory
- ERM learning with unbounded sampling
- Regularization networks and support vector machines
- The weight-decay technique in learning from data: an optimization point of view
- Optimal rates for the regularized least-squares algorithm
- Learning with sample dependent hypothesis spaces
- Application of integral operator for regularized least-square regression
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Coefficient regularized regression with non-iid sampling
- Spectral Algorithms for Supervised Learning
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Leave-One-Out Bounds for Kernel Methods