Integral operator approach to learning theory with unbounded sampling
DOI: 10.1007/s11785-011-0139-0 · zbMATH Open: 1285.68143 · OpenAlex: W1990746306 · MaRDI QID: Q371679
Authors: Yun-Long Feng, Shao-Gao Lv
Publication date: 10 October 2013
Published in: Complex Analysis and Operator Theory
Full work available at URL: https://doi.org/10.1007/s11785-011-0139-0
Recommendations
- Learning theory estimates via integral operators and their approximations
- On learning with integral operators
- A note on application of integral operator in learning theory
- Concentration estimates for learning with unbounded sampling
- ERM learning with unbounded sampling
- Local learning estimates by integral operators
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Sampling inequalities for infinitely smooth functions, with applications to interpolation and machine learning
- Statistical learning methods for uniform approximation bounds in multiresolution spaces
- The bounds on the rate of uniform convergence of learning process based on complex random samples
Keywords: reproducing kernel Hilbert spaces; integral operator; capacity independent error bounds; least square regularized regression
MSC: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Regularization networks and support vector machines
- Learning Theory
- Remarks on Inequalities for Large Deviation Probabilities
- On the mathematical foundations of learning
- Title not available
- Optimal rates for the regularized least-squares algorithm
- Probability Inequalities for the Sum of Independent Random Variables
- Leave-One-Out Bounds for Kernel Methods
- Shannon sampling. II: Connections to learning theory
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Capacity of reproducing kernel spaces in learning theory
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Sums and Gaussian vectors
- A note on application of integral operator in learning theory
- SVM learning and \(L_p\) approximation by Gaussians on Riemannian manifolds
- Online learning with Markov sampling
- Geometry on probability spaces
- Regularization in kernel learning
Cited In (15)
- Convergence rate of SVM for kernel-based robust regression
- ERM learning with unbounded sampling
- Online minimum error entropy algorithm with unbounded sampling
- Support vector machines regression with unbounded sampling
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Analysis of regression algorithms with unbounded sampling
- Optimal convergence rates of high order Parzen windows with unbounded sampling
- Concentration estimates for learning with unbounded sampling
- Distributed semi-supervised regression learning with coefficient regularization
- Regularized least square regression with unbounded and dependent sampling
- Convergence Rates for Learning Linear Operators from Noisy Data
- Online regression with unbounded sampling
- Spectral theory for Gaussian processes: reproducing kernels, boundaries, and \(\mathrm{L}^{2}\)-wavelet generators with fractional scales
- A note on application of integral operator in learning theory
- Application of integral operator for vector-valued regression learning