Efficient algorithms for learning functions with bounded variation
From MaRDI portal
Recommendations
- Improved bounds about on-line learning of smooth functions of a single variable
- On the learnability of rich function classes
- Bounds on the Number of Examples Needed for Learning Functions
- Characterizations of learnability for classes of \(\{0,\dots,n\}\)-valued functions
Cites work
- Scientific article (untitled), zbMATH DE number 51427
- Scientific article (untitled), zbMATH DE number 3436645
- Scientific article (untitled), zbMATH DE number 3195517
- A generalization of Sauer's lemma
- A theory of the learnable
- Bounds on the Number of Examples Needed for Learning Functions
- Convergence of stochastic processes
- Covering numbers for real-valued function classes
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Distribution inequalities for the binomial law
- Efficient distribution-free learning of probabilistic concepts
- Equivalence of models for polynomial learnability
- General bounds on the number of examples needed for learning probabilistic concepts
- Learnability and the Vapnik-Chervonenkis dimension
- Neural Network Learning
- Predicting \(\{ 0,1\}\)-functions on randomly drawn points
- Prediction, learning, uniform convergence, and scale-sensitive dimensions
- Scale-sensitive dimensions, uniform convergence, and learnability
- The complexity of learning according to two models of a drifting environment
- The importance of convexity in learning with squared loss
- Toward efficient agnostic learning
Cited in (3)
MaRDI item: Q1887165