The Learning Rate of \(l_p\)-Coefficient Regularized Shannon Sampling Algorithm
Publication:5259986
DOI: 10.11845/sxjz.2012189b
zbMATH Open: 1324.68114
OpenAlex: W2890743753
MaRDI QID: Q5259986
Author: Bao-Huai Sheng
Publication date: 29 June 2015
Full work available at URL: http://www.oaj.pku.edu.cn/sxjz/EN/10.11845/sxjz.2012189b
Recommendations
- The convergence rates of Shannon sampling learning algorithms
- Shannon sampling. II: Connections to learning theory
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Learning rates of \(l^q\) coefficient regularization learning with Gaussian kernel
- An Optimal Convergence Rate for the Gaussian Regularized Shannon Sampling Series
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- The convergence rate of learning algorithms for least square regression with sample dependent hypothesis spaces
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss
- Improved bounds on the sample complexity of learning
Mathematics Subject Classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Sampling theory in information and communication theory (94A20)
Cited In (5)
- The convergence rates of Shannon sampling learning algorithms
- An Optimal Convergence Rate for the Gaussian Regularized Shannon Sampling Series
- The performance of semi-supervised Laplacian regularized regression with the least square loss
- Shannon sampling. II: Connections to learning theory
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss