Generalization bounds of regularization algorithms derived simultaneously through hypothesis space complexity, algorithmic stability and data quality
DOI: 10.1142/S0219691311004213 · zbMATH Open: 1219.62003 · MaRDI QID: Q3087503
Authors: Bin Zou, Hai Zhang, Xiangyu Chang, Zongben Xu
Publication date: 16 August 2011
Published in: International Journal of Wavelets, Multiresolution and Information Processing
Recommendations
- 10.1162/153244302760200704
- Learning with generalization capability by kernel methods of bounded complexity
- scientific article; zbMATH DE number 6001978
- Stability and generalization of learning algorithm: a new framework of stability
- Generalization bounds of a compressed regression learning algorithm
Classification
- Statistical aspects of information-theoretic topics (62B10)
- Nonparametric estimation (62G05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Foundations and philosophical topics in statistics (62A01)
Cites Work
- Regularization networks and support vector machines
- Learning Theory
- On the mathematical foundations of learning
- Title not available
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- Shannon sampling and function reconstruction from point values
- Learning rates of least-square regularized regression
- Capacity of reproducing kernel spaces in learning theory
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Learning from dependent observations
Cited In (10)
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Elastic-net regularization for low-rank matrix recovery
- Regularized least square algorithm with two kernels
- Attribute reduction of concept lattice based on irreducible elements
- Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations
- Stability and generalization of learning algorithm: a new framework of stability
- Title not available
- 10.1162/153244302760200704
- Block-regularized repeated learning-testing for estimating generalization error