A note on stability of error bounds in statistical learning theory
DOI: 10.1142/S0219530511001893 · zbMATH Open: 1267.68185 · OpenAlex: W2046237749 · Wikidata: Q60700494 · MaRDI QID: Q3096969
Authors: M. Li, Andrea Caponnetto
Publication date: 15 November 2011
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530511001893
Recommendations
- Stability results in learning theory
- DOI: 10.1162/153244302760200704
- Stability of unstable learning algorithms
- A survey on learning theory. I: Stability and generalization
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
MSC classification: Learning and adaptive systems in artificial intelligence (68T05); Coding and information theory (compaction, compression, models of communication, encoding schemes, etc.) (aspects in computer science) (68P30)
Cites Work
- Regularization networks and support vector machines
- On the mathematical foundations of learning
- Optimal rates for the regularized least-squares algorithm
- Learning theory estimates via integral operators and their approximations
- Consistency of Support Vector Machines and Other Regularized Kernel Classifiers
- Cross-validation based adaptation for regularization operators in learning theory
- Model selection for regularized least-squares algorithm in learning theory
- On regularization algorithms in learning theory
Cited in 3 documents