Learning rates for multi-kernel linear programming classifiers
DOI: 10.1007/s11464-011-0103-3
zbMath: 1215.68197
OpenAlex: W2034279187
MaRDI QID: Q537615
Publication date: 20 May 2011
Published in: Frontiers of Mathematics in China
Full work available at URL: https://doi.org/10.1007/s11464-011-0103-3
MSC classifications:
- General nonlinear regression (62J02)
- Learning and adaptive systems in artificial intelligence (68T05)
- Inequalities in approximation (Bernstein, Jackson, Nikol'skiĭ-type inequalities) (41A17)
Cites Work
- Multi-kernel regularized classifiers
- On the Bayes-risk consistency of regularized boosting methods
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Optimal aggregation of classifiers in statistical learning
- On the mathematical foundations of learning
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- https://doi.org/10.1162/153244302760200713
- Shannon sampling and function reconstruction from point values
- Neural Network Learning
- Learning Theory
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels