The entropy in learning theory. Error estimates
From MaRDI portal
Cited in (22)
- Approximation by neural networks and learning theory
- Universal discretization
- Thresholding in learning theory
- \(L_2\)-norm sampling discretization and recovery of functions from RKHS with finite trace
- Adaptive estimation for nonlinear systems using reproducing kernel Hilbert spaces
- A remark on entropy numbers
- Sampling discretization of integral norms of the hyperbolic cross polynomials
- Marcinkiewicz-type discretization of \(L^p\)-norms under the Nikolskii-type inequality assumption
- On universal estimators in learning theory
- Optimal estimators in learning theory
- Weak thresholding greedy algorithms in Banach spaces
- Local entropy in learning theory
- On adaptive estimators in statistical learning theory
- A deterministic learning approach based on discrepancy
- \(L^p\)-convergence of greedy algorithm by generalized Walsh system
- On the entropy numbers between the anisotropic spaces and the spaces of functions with mixed smoothness
- Integral norm discretization and related problems
- Sampling discretization and related problems
- A metric entropy bound is not sufficient for learnability
- Error Estimates for Multivariate Regression on Discretized Function Spaces
- Entropy numbers of functions on \([-1,1]\) with Jacobi weights
- Sampling discretization error of integral norms for function classes