Concentration estimates for the moving least-square method in learning theory
Publication: 719353
DOI: 10.1016/j.jat.2011.03.006
zbMath: 1237.68108
OpenAlex: W2095375944
MaRDI QID: Q719353
Publication date: 10 October 2011
Published in: Journal of Approximation Theory
Full work available at: https://doi.org/10.1016/j.jat.2011.03.006
MSC classification: Computational learning theory (68Q32); Least squares and related methods for stochastic control systems (93E24)
Related Items
- Error analysis of the moving least-squares method with non-identical sampling
- Learning performance of regularized moving least square regression
- Convergence rate for the moving least-squares learning with dependent sampling
- Error analysis for \(l^q\)-coefficient regularized moving least-square regression
- Statistical analysis of the moving least-squares method with unbounded sampling
- Moving quantile regression
- Error analysis of the moving least-squares regression learning algorithm with β-mixing and non-identical sampling
Cites Work
- Multi-kernel regularized classifiers
- Moving least-square method in learning theory
- High order Parzen windows and randomized sampling
- The covering number in learning theory
- Weak convergence and empirical processes. With applications to statistics
- Regularization networks and support vector machines
- Statistical performance of support vector machines
- Online regression with varying Gaussians and non-identical distributions
- SVM learning and \(L^p\) approximation by Gaussians on Riemannian manifolds
- Learning Theory
- Support Vector Machines
- Capacity of reproducing kernel spaces in learning theory
- Online learning with Markov sampling
- Neural Network Learning