Kernel conjugate gradient methods with random projections
From MaRDI portal
Publication:1979923
DOI: 10.1016/j.acha.2021.05.004
OpenAlex: W3164947295
MaRDI QID: Q1979923
Publication date: 3 September 2021
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/1811.01760
- Learning and adaptive systems in artificial intelligence (68T05)
- Approximation by operators (in particular, by integral operators) (41A35)
- Sampling theory in information and communication theory (94A20)
Cites Work
- Kernel ridge vs. principal component regression: minimax bounds and the qualification of regularization operators
- Optimal rates for regularization of statistical inverse learning problems
- A simple proof of the restricted isometry property for random matrices
- Uniform uncertainty principle for Bernoulli and subgaussian ensembles
- Optimal learning rates for kernel partial least squares
- Randomized sketches for kernels: fast and optimal nonparametric regression
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Optimal rates for the regularized least-squares algorithm
- Learning theory estimates via integral operators and their approximations
- Convergence rates of kernel conjugate gradient for random design regression
- New and Improved Johnson–Lindenstrauss Embeddings via the Restricted Isometry Property
- DOI: 10.1162/15324430260185556
- Learning Theory
- Support Vector Machines
- Decoding by Linear Programming
- Cross-validation based adaptation for regularization operators in learning theory
- Remarks on Inequalities for Large Deviation Probabilities
- Norm Inequalities Equivalent to Heinz Inequality
- Faster Kernel Ridge Regression Using Sketching and Preconditioning
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Kernel partial least squares for stationary data
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- Nyström type subsampling analyzed as a regularized projection
- An Introduction to Matrix Concentration Inequalities
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Methods of conjugate gradients for solving linear systems
- An operator inequality