Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
Publication: 2168686
DOI: 10.1016/j.acha.2022.07.005
zbMATH Open: 1497.65090
OpenAlex: W4288045268
MaRDI QID: Q2168686
FDO: Q2168686
Junhong Lin, Xin Guo, Ding-Xuan Zhou
Publication date: 26 August 2022
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://doi.org/10.1016/j.acha.2022.07.005
Keywords: weak convergence; Hilbert space; relaxation method; randomized Kaczmarz algorithm; online gradient descent learning algorithm
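For orientation, the finite-dimensional randomized Kaczmarz iteration that this paper's Hilbert-space analysis builds on can be sketched as follows. This is a minimal illustration of the standard Strohmer-Vershynin scheme (rows sampled with probability proportional to their squared norms), not the relaxed or online variants studied in the paper; the function name and parameters are illustrative only.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=1000, seed=0):
    """Standard randomized Kaczmarz iteration: at each step pick row i with
    probability ||a_i||^2 / ||A||_F^2 and project the current iterate onto
    the hyperplane <a_i, x> = b_i."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)   # squared row norms
    probs = row_norms_sq / row_norms_sq.sum()    # sampling distribution
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Tiny consistency check on a random overdetermined system.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 20))
    x_true = rng.standard_normal(20)
    b = A @ x_true
    x_hat = randomized_kaczmarz(A, b, n_iter=5000)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```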
Cites Work
- A randomized Kaczmarz algorithm with exponential convergence
- Fundamentals of Computerized Tomography
- On Projection Algorithms for Solving Convex Feasibility Problems
- Title not available
- Online learning algorithms
- Nonparametric stochastic approximation with large step-sizes
- Title not available
- Row-Action Methods for Huge and Sparse Systems and Their Applications
- Über monotone Matrixfunktionen
- Iterated Products of Projections in Hilbert Space
- Best approximation in inner product spaces
- Title not available
- Functional Analysis
- Shorter Notes: Some Operator Monotone Functions
- Sparse online learning via truncated gradient
- Hilbertian convex feasibility problem: Convergence of projection methods
- The square root of a positive self-adjoint operator
- Online gradient descent learning algorithms
- Title not available
- Title not available
- Randomized Iterative Methods for Linear Systems
- Introduction to the peptide binding problem of computational immunology: new results
- Randomized Kaczmarz solver for noisy linear systems
- On the factorization of matrices
- On Complexity Issues of Online Learning Algorithms
- Online Regularized Classification Algorithms
- Almost sure convergence of the Kaczmarz algorithm with random measurements
- Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
- Convergence analysis for Kaczmarz-type methods in a Hilbert space framework
- Schwarz iterative methods: infinite space splittings
- Optimal rates for regularization of statistical inverse learning problems
- Learning gradients via an early stopping gradient descent method
- Randomized subspace actions and fusion frames
- Universality of deep convolutional neural networks
- Deep distributed convolutional neural networks: Universality
- Learning theory of randomized Kaczmarz algorithm
- An accelerated randomized Kaczmarz algorithm
- Fast and strong convergence of online learning algorithms
- Stochastic subspace correction in Hilbert space
- Linear convergence of the randomized sparse Kaczmarz method
- Stability and optimization error of stochastic gradient descent for pairwise learning
- Modeling interactive components by coordinate kernel polynomial models
- Title not available
- Online minimum error entropy algorithm with unbounded sampling
Cited In (10)
- On convergence rate of the randomized Kaczmarz method
- Capacity dependent analysis for functional online learning algorithms
- A note on the behavior of the randomized Kaczmarz algorithm of Strohmer and Vershynin
- Convergence analysis for Kaczmarz-type methods in a Hilbert space framework
- An Optimal Scheduled Learning Rate for a Randomized Kaczmarz Algorithm
- Kaczmarz algorithm in Hilbert space
- Online regularized learning algorithm for functional data
- High probability bounds on AdaGrad for constrained weakly convex optimization
- A randomized Kaczmarz algorithm with exponential convergence
- Correction to: "Convergence rates for Kaczmarz-type algorithms"