The convergence rates of Shannon sampling learning algorithms
DOI: 10.1007/S11425-012-4371-5 · zbMATH Open: 1288.68199 · OpenAlex: W2266252408 · MaRDI QID: Q2392948 · FDO: Q2392948
Author: Bao-Huai Sheng
Publication date: 5 August 2013
Published in: Science China. Mathematics
Full work available at URL: https://doi.org/10.1007/s11425-012-4371-5
Recommendations
- Shannon sampling. II: Connections to learning theory
- The learning rate of \(l_p\)-coefficient regularized Shannon sampling algorithm
- On regularization algorithms in learning theory
- Learning theory estimates via integral operators and their approximations
- Learning rates of Tikhonov regularized regressions based on sample dependent RKHS
Keywords: learning theory; reproducing kernel Hilbert spaces; regularization error; sample error; function reconstruction; Shannon sampling learning algorithm
Classifications: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05); Computational learning theory (68Q32); Rate of convergence, degree of approximation (41A25)
Cites Work
- Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem
- Theory of Reproducing Kernels
- On the mathematical foundations of learning
- Scattered Data Approximation
- Title not available
- Shannon sampling and function reconstruction from point values
- Shannon sampling. II: Connections to learning theory
- Mean convergence of Hermite-Fejér interpolation
- The covering number in learning theory
- Capacity of reproducing kernel spaces in learning theory
- Mercer theorem for RKHS on noncompact sets
- Error estimates for scattered data interpolation on spheres
- Learning rates of least-square regularized regression with polynomial kernels
- Whittaker-Kotelnikov-Shannon sampling theorem and aliasing error
- On the rate of convergence for multi-category classification based on convex losses
- Norm estimates for the inverses of a general class of scattered-data radial-function interpolation matrices
- Lower bounds for norms of inverses of interpolation matrices for radial basis functions
- On condition numbers associated with radial-function interpolation
- On summability of weighted Lagrange interpolation. I
- On summability of weighted Lagrange interpolation. III: Jacobi weights
- Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
- On summability of weighted Lagrange interpolation. II
- On the convergence and saturation problem of a sequence of discrete linear operators of exponential type in \(L_p(-\infty,\infty)\) spaces
- Title not available
- Estimates of the norm of the Mercer kernel matrices with discrete orthogonal transforms
- Multivariate irregular sampling theorem
- Generalization performance of graph-based semi-supervised classification
- Title not available
- Riesz basis, Paley-Wiener class and tempered splines
- Recovery of band limited functions via cardinal splines
- Sampling expansions for functions having values in a Banach space
- Applications of Bernstein-Durrmeyer operators in estimating the covering number
- Applications of the Bernstein-Durrmeyer operators in estimating the norm of Mercer kernel matrices
Cited In (7)
- The convergence rate for a \(K\)-functional in learning theory
- Error analysis on Hermite learning with gradient data
- An Optimal Convergence Rate for the Gaussian Regularized Shannon Sampling Series
- The performance of semi-supervised Laplacian regularized regression with the least square loss
- Shannon sampling. II: Connections to learning theory
- Existence and regularity of solutions to semi-linear Dirichlet problem of infinitely degenerate elliptic operators with singular potential term
- The learning rate of \(l_p\)-coefficient regularized Shannon sampling algorithm