Convergence of stochastic proximal gradient algorithm
From MaRDI portal
Publication:2019902
Abstract: We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems in which the objective is the sum of a smooth component and a possibly non-smooth one. We study convergence of the iterates and derive nonasymptotic bounds in expectation in the strongly convex case, as well as almost-sure convergence results under weaker assumptions. Our approach allows us to avoid averaging and to weaken the boundedness assumptions that are often imposed in theoretical studies and may not be satisfied in practice.
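The abstract concerns iterations of the form x_{k+1} = prox_{γ_k g}(x_k − γ_k ĝ_k), where ĝ_k is a stochastic estimate of the gradient of the smooth component. As a minimal illustration (not taken from the paper — the objective, function names, and step-size schedule below are our own choices), here is a sketch for a lasso-type problem, where the proximal operator of the ℓ₁ penalty is soft-thresholding:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def stochastic_proximal_gradient(A, b, lam, n_iter=2000, seed=0):
    """Minimize (1/2n)||Ax - b||^2 + lam * ||x||_1 using single-sample
    stochastic gradients and a diminishing step size (an illustrative
    schedule, not the one analyzed in the paper)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(n_iter):
        i = rng.integers(n)                       # sample one data point
        grad = (A[i] @ x - b[i]) * A[i]           # unbiased estimate of the smooth gradient
        gamma = 0.1 / (k + 1) ** 0.6              # diminishing step size
        x = soft_threshold(x - gamma * grad, gamma * lam)  # proximal step
    return x
```

Note that the last iterate x is returned directly; the paper's point is precisely that, under suitable assumptions, convergence can be established for the iterates themselves, without the averaging step common in earlier analyses.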
Recommendations
- Ergodic convergence of a stochastic proximal point algorithm
- scientific article; zbMATH DE number 1322672
- Convergences of regularized algorithms and stochastic gradient methods with random projections
- Convergence of Proximal-Like Algorithms
- Convergence analysis of gradient descent stochastic algorithms
- scientific article; zbMATH DE number 1341059
- New Convergence Aspects of Stochastic Gradient Algorithms
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
Cites work
- scientific article; zbMATH DE number 4015993
- scientific article; zbMATH DE number 3790208
- scientific article; zbMATH DE number 48727
- scientific article; zbMATH DE number 1043533
- scientific article; zbMATH DE number 3894826
- A Convergent Incremental Gradient Method with a Constant Step Size
- A Stochastic Approximation Method
- A first-order stochastic primal-dual algorithm with correction step
- A sparsity preserving stochastic gradient methods for sparse regression
- Accelerated and inexact forward-backward algorithms
- Acceleration of Stochastic Approximation by Averaging
- An optimal method for stochastic composite optimization
- Beyond the regret minimization barrier: optimal algorithms for stochastic strongly-convex optimization
- Convex analysis and monotone operator theory in Hilbert spaces
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Dual averaging methods for regularized stochastic learning and online optimization
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- Efficient online and batch learning using forward backward splitting
- Elastic-net regularization in learning theory
- Gradient Convergence in Gradient methods with Errors
- Minimizing finite sums with the stochastic average gradient
- Modified Fejér sequences and applications
- Nonparametric sparsity and regularization
- On perturbed proximal gradient algorithms
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- Optimization methods for large-scale machine learning
- Pegasos: primal estimated sub-gradient solver for SVM
- Prediction, Learning, and Games
- Proximal methods for the latent group lasso penalty
- Proximal splitting methods in signal processing
- Regularization and Variable Selection Via the Elastic Net
- Robust Stochastic Approximation Approach to Stochastic Programming
- Signal Recovery by Proximal Forward-Backward Splitting
- Stochastic Estimation of the Maximum of a Regression Function
- Stochastic approximations and perturbations in forward-backward splitting for monotone operators
- Stochastic dual coordinate ascent methods for regularized loss minimization
- Stochastic forward-backward splitting for monotone inclusions
- Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping
- Structured sparsity through convex optimization
- Understanding machine learning. From theory to algorithms
Cited in (46)
- High-performance statistical computing in the computing environments of the 2020s
- Convergence properties of stochastic optimization procedures
- SABRINA: a stochastic subspace majorization-minimization algorithm
- Convergence analysis of the stochastic reflected forward-backward splitting algorithm
- Maximum likelihood estimation of regularization parameters in high-dimensional inverse problems: an empirical Bayesian approach. II: Theoretical analysis
- General convergence analysis of stochastic first-order methods for composite optimization
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
- Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
- Fluorescence image deconvolution microscopy via generative adversarial learning (FluoGAN)
- Federated primal dual fixed point algorithm
- The stochastic proximal distance algorithm
- Scalable estimation strategies based on stochastic approximations: classical results and new insights
- Stochastic forward-backward splitting for monotone inclusions
- Proximal Gradient Methods for Machine Learning and Imaging
- Ergodic convergence of a stochastic proximal point algorithm
- The stochastic auxiliary problem principle in Banach spaces: measurability and convergence
- Almost sure convergence rates of stochastic proximal gradient descent algorithm
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- A linearly convergent stochastic recursive gradient method for convex optimization
- A stochastic variance reduced primal dual fixed point method for linearly constrained separable optimization
- A new regularized stochastic approximation framework for stochastic inverse problems
- New Convergence Aspects of Stochastic Gradient Algorithms
- On the convergence of stochastic primal-dual hybrid gradient
- Binary quantized network training with sharpness-aware minimization
- New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
- Mini-batch stochastic subgradient for functional constrained optimization
- Sparse online regression algorithm with insensitive loss functions
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- Stochastic proximal subgradient descent oscillates in the vicinity of its accumulation set
- Stochastic proximal gradient method for \(\ell_1\) regularized optimization over a sphere
- scientific article; zbMATH DE number 7733450
- Privacy-preserving federated learning on lattice quantization
- scientific article; zbMATH DE number 7370578
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- A framework of convergence analysis of mini-batch stochastic projected gradient methods
- Stochastic proximal splitting algorithm for composite minimization
- Universal regular conditional distributions via probabilistic transformers
- Stochastic proximal-gradient algorithms for penalized mixed models
- On perturbed proximal gradient algorithms
- Analysis of Online Composite Mirror Descent Algorithm
- Sharper Bounds for Proximal Gradient Algorithms with Errors
- Stochastic block projection algorithms with extrapolation for convex feasibility problems
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
- Convergence analysis of stochastic higher-order majorization–minimization algorithms