Optimal convergence rate of the universal estimation error
From MaRDI portal (MaRDI item Q516004)
Recommendations
- scientific article; zbMATH DE number 67635
- Lower Bounds for the Empirical Minimization Algorithm
- Minimax nonparametric classification. I. Rates of convergence
- The convergence rate of learning algorithms for least square regression with sample dependent hypothesis spaces
- Optimal global rates of convergence for noiseless regression estimation problems with adaptively chosen design
Cites work
- scientific article; zbMATH DE number 194093 (no title available)
- scientific article; zbMATH DE number 194266 (no title available)
- doi:10.1162/153244302760200713
- doi:10.1162/153244303321897690
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Covering numbers for real-valued function classes
- Introduction to empirical processes and semiparametric inference
- Local Rademacher complexities
- Necessary and Sufficient Conditions for the Uniform Convergence of Means to their Expectations
- Rademacher averages and phase transitions in Glivenko-Cantelli classes
- Scale-sensitive dimensions, uniform convergence, and learnability
- Some limit theorems for empirical processes (with discussion)
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- Uniform and universal Glivenko-Cantelli classes
- ε-entropy of convex sets and functions
Cited in (3)