Integral operator approach to learning theory with unbounded sampling
From MaRDI portal
Publication:371679
DOI: 10.1007/s11785-011-0139-0
zbMath: 1285.68143
OpenAlex: W1990746306
MaRDI QID: Q371679
Publication date: 10 October 2013
Published in: Complex Analysis and Operator Theory
Full work available at URL: https://doi.org/10.1007/s11785-011-0139-0
Keywords: integral operator; reproducing kernel Hilbert spaces; capacity independent error bounds; least square regularized regression
MSC classification: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
Related Items (9)
- Regularized least square regression with unbounded and dependent sampling
- Distributed semi-supervised regression learning with coefficient regularization
- Support vector machines regression with unbounded sampling
- Online minimum error entropy algorithm with unbounded sampling
- Convergence rate of SVM for kernel-based robust regression
- Optimal convergence rates of high order Parzen windows with unbounded sampling
- Spectral theory for Gaussian processes: reproducing kernels, boundaries, and L2-wavelet generators with fractional scales
- Analysis of regression algorithms with unbounded sampling
- Half supervised coefficient regularization for regression learning with unbounded sampling
Cites Work
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Geometry on probability spaces
- Regularization in kernel learning
- A note on application of integral operator in learning theory
- Sums and Gaussian vectors
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Learning rates of least-square regularized regression
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Probability Inequalities for the Sum of Independent Random Variables
- SVM learning and Lp approximation by Gaussians on Riemannian manifolds
- Learning Theory
- Capacity of reproducing kernel spaces in learning theory
- Online learning with Markov sampling
- Remarks on Inequalities for Large Deviation Probabilities
- Leave-One-Out Bounds for Kernel Methods