A note on different covering numbers in learning theory.
Publication: Q1426052
DOI: 10.1016/S0885-064X(03)00033-5
zbMath: 1057.68044
OpenAlex: W2055979621
MaRDI QID: Q1426052
Publication date: 14 March 2004
Published in: Journal of Complexity
Full work available at URL: https://doi.org/10.1016/s0885-064x(03)00033-5
Related Items (13)
- Estimates on compressed neural networks regression
- Analysis of convergence performance of neural networks ranking algorithm
- Concentration estimates for learning with unbounded sampling
- Approximations of semicontinuous functions with applications to stochastic optimization and statistical estimation
- Estimation of convergence rate for multi-regression learning algorithm
- Unified approach to coefficient-based regularized regression
- A discretized Tikhonov regularization method for a fractional backward heat conduction problem
- Optimal rate of the regularized regression learning algorithm
- The covering number for some Mercer kernel Hilbert spaces
- Nonparametric nonlinear regression using polynomial and neural approximators: a numerical comparison
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Estimates of learning rates of regularized regression via polyline functions
- Unnamed Item
Cites Work
- Estimation of dependences based on empirical data. Transl. from the Russian by Samuel Kotz
- Regularization networks and support vector machines
- On the mathematical foundations of learning
- Scale-sensitive dimensions, uniform convergence, and learnability
- Probability Inequalities for Sums of Bounded Random Variables
- Theory of Reproducing Kernels
- Combinatorial methods in density estimation