Learning rates of multi-kernel regularized regression
From MaRDI portal
Publication: 974504
DOI: 10.1016/j.jspi.2010.03.020
zbMath: 1206.68252
OpenAlex: W2047709005
MaRDI QID: Q974504
Publication date: 3 June 2010
Published in: Journal of Statistical Planning and Inference
Full work available at URL: https://doi.org/10.1016/j.jspi.2010.03.020
Keywords: reproducing kernel Hilbert spaces; learning rate; multi-kernel regularization; Rademacher chaos complexity
MSC classifications: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
Related Items (6)
- Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem
- Reproducing kernels and choices of associated feature spaces, in the form of \(L^2\)-spaces
- Learning performance of regularized regression with multiscale kernels based on Markov observations
- Optimal convergence rates of high order Parzen windows with unbounded sampling
- Approximation analysis of gradient descent algorithm for bipartite ranking
- Statistical analysis of the moving least-squares method with unbounded sampling
Cites Work
- Model selection for regularized least-squares algorithm in learning theory
- Multi-kernel regularized classifiers
- Learning and approximation by Gaussians on Riemannian manifolds
- Fast rates for support vector machines using Gaussian kernels
- Analysis of support vector machines regression
- Ranking and empirical minimization of \(U\)-statistics
- Learning rates of least-square regularized regression
- On the mathematical foundations of learning
- Learning Theory
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- 10.1162/153244303321897690
- Neural Network Learning
- Learning Bounds for Support Vector Machines with Learned Kernels
- Theory of Reproducing Kernels