Estimation of convergence rate for multi-regression learning algorithm
From MaRDI portal
Publication: 439762
DOI: 10.1007/S11432-011-4314-8 · zbMath: 1245.68163 · OpenAlex: W2071382058 · MaRDI QID: Q439762
Feilong Cao, Yongquan Zhang, Zong-Ben Xu
Publication date: 17 August 2012
Published in: Science China. Information Sciences
Full work available at URL: http://engine.scichina.com/doi/10.1007/s11432-011-4314-8
Classification:
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- General nonlinear regression (62J02)
- Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Learning rates for regularized classifiers using multivariate polynomial kernels
- A note on different covering numbers in learning theory
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- The covering number in learning theory
- Fourier series and approximation on hexagonal and triangular domains
- Optimal rates for the regularized least-squares algorithm
- Approximation with polynomial kernels and SVM classifiers
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Learning Theory
- Capacity of reproducing kernel spaces in learning theory
- A new concentration result for regularized risk minimizers
- Estimating the approximation error in learning theory
- Covering numbers for support vector machines
- Structural risk minimization over data-dependent hierarchies