Consistency of learning algorithms using Attouch-Wets convergence
From MaRDI portal
Publication:3225085
Recommendations
- Learnability, stability and uniform convergence
- On regularization algorithms in learning theory
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Iterative regularization for learning with convex loss functions
- A survey on learning theory. I: Stability and generalization
Cites work
- scientific article; zbMATH DE number 4170917 (no title available)
- scientific article; zbMATH DE number 3901506 (no title available)
- A Statistical View of Some Chemometrics Regression Tools
- A dynamical approach to convex minimization coupling approximation with the steepest descent method
- An iterative algorithm for nonlinear inverse problems with joint sparsity constraints in vector-valued regimes and an application to color image inpainting
- Epigraphical and Uniform Convergence of Convex Functions
- Fast rates for support vector machines using Gaussian kernels
- Learning Theory
- Model selection for regularized least-squares algorithm in learning theory
- Networks and the best approximation property
- On the mathematical foundations of learning
- Quantitative Stability of Variational Systems: I. The Epigraphical Distance
- Quantitative Stability of Variational Systems II. A Framework for Nonlinear Conditioning
- Quantitative stability of variational systems. III: \(\varepsilon\)-approximate solutions
- Stability of \(\varepsilon\)-approximate Solutions to Convex Stochastic Programs
- Theory of Classification: a Survey of Some Recent Advances
This page was built for publication: Consistency of learning algorithms using Attouch-Wets convergence (MaRDI item Q3225085)