Estimation of convergence rate for multi-regression learning algorithm
From MaRDI portal
Recommendations
- scientific article; zbMATH DE number 6178473
- Optimal rate of the regularized regression learning algorithm
- Learning rates of least-square regularized regression
- The convergence rate of learning algorithms for least square regression with sample dependent hypothesis spaces
- Learning rates of least-square regularized regression with polynomial kernels
Cites work
- scientific article; zbMATH DE number 1278781 (no title available)
- scientific article; zbMATH DE number 1332320 (no title available)
- A new concentration result for regularized risk minimizers
- A note on different covering numbers in learning theory
- Approximation with polynomial kernels and SVM classifiers
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Capacity of reproducing kernel spaces in learning theory
- Covering numbers for support vector machines
- Estimating the approximation error in learning theory
- Fourier series and approximation on hexagonal and triangular domains
- Learning Theory
- Learning rates for regularized classifiers using multivariate polynomial kernels
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Optimal estimators in learning theory
- Optimal rates for the regularized least-squares algorithm
- Structural risk minimization over data-dependent hierarchies
- Support vector machine soft margin classifiers: error analysis
- The covering number in learning theory
Cited in (1 publication)
This page was built for publication: Estimation of convergence rate for multi-regression learning algorithm
MaRDI item Q439762