Convergence rates of kernel conjugate gradient for random design regression
DOI: 10.1142/S0219530516400017 · zbMATH Open: 1349.62125 · arXiv: 1607.02387 · OpenAlex: W2963613337 · MaRDI QID: Q2835985 · FDO: Q2835985
Nicole Krämer, Gilles Blanchard
Publication date: 30 November 2016
Published in: Analysis and Applications (Singapore)
Full work available at URL: https://arxiv.org/abs/1607.02387
Recommendations
- Conjugate gradients for kernel machines
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- Convergences of regularized algorithms and stochastic gradient methods with random projections
- Fast learning rates for regularized regression algorithms
- Learning rates of least-square regularized regression
Keywords: nonparametric regression; partial least squares; reproducing kernel Hilbert space; conjugate gradient; minimax convergence rates
MSC: Nonparametric regression and quantile regression (62G08); Asymptotic properties of nonparametric inference (62G20); Optimal stopping in statistics (62L15)
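As a gloss on the keywords above: the paper studies conjugate gradient (CG) applied to the kernel (Gram) system, where the stopping index, rather than a penalty parameter, acts as the regularizer. Below is a minimal illustrative sketch in Python, not the paper's exact estimator (which runs CG with respect to a kernel-induced inner product, closely related to kernel partial least squares, and uses a data-driven stopping rule); the function names, the RBF kernel choice, and all parameter values are assumptions for demonstration.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Pairwise squared distances -> RBF Gram matrix (illustrative kernel choice).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_cg(K, y, n_iter=10):
    """Plain conjugate gradient on the kernel system K @ alpha = y.

    Early stopping regularizes: after m steps the iterate lies in the
    Krylov space span{y, K y, ..., K^(m-1) y}, so a small n_iter means
    stronger smoothing (a simplification of the paper's setting).
    """
    alpha = np.zeros_like(y)
    r = y.copy()               # residual y - K @ alpha (alpha starts at 0)
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(n_iter):
        Kp = K @ p
        step = rs / (p @ Kp)
        alpha += step * p
        r -= step * Kp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return alpha

# Toy random-design regression data (hypothetical setup).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
K = rbf_kernel(X, X)
alpha = kernel_cg(K, y, n_iter=8)    # the stopping index is the tuning knob
y_hat = K @ alpha                    # fitted values at the design points
```

Here the stopping index plays the role that the ridge penalty plays in kernel ridge regression; the paper's minimax convergence rates concern an early-stopped CG iterate of this general kind.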
Cites Work
- An introduction to support vector machines and other kernel-based learning methods
- The Collinearity Problem in Linear Regression. The Partial Least Squares (PLS) Approach to Generalized Inverses
- Optimal rates for the regularized least-squares algorithm
- Title not available (DOI: 10.1162/15324430260185556)
- Title not available
- Title not available
- Title not available
- Cross-validation based adaptation for regularization operators in learning theory
Cited In (24)
- Title not available
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
- Title not available
- Title not available
- Distributed learning with indefinite kernels
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Semi-supervised learning with summary statistics
- Capacity dependent analysis for functional online learning algorithms
- Distributed least squares prediction for functional linear regression
- Title not available
- Faster Kernel Ridge Regression Using Sketching and Preconditioning
- Convergence analysis for kernel-regularized online regression associated with an RRKHS
- Optimal rates for regularization of statistical inverse learning problems
- From inexact optimization to learning via gradient concentration
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- Adaptive parameter selection for kernel ridge regression
- On a regularization of unsupervised domain adaptation in RKHS
- Asymptotic analysis for affine point processes with large initial intensity
- Optimal learning rates for distribution regression
- Analysis of target data-dependent greedy kernel algorithms: convergence rates for \(f\)-, \(f \cdot P\)- and \(f/P\)-greedy
- Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
- Title not available
- Accelerate stochastic subgradient method by leveraging local growth condition
- Kernel conjugate gradient methods with random projections