Convergence of stochastic proximal gradient algorithm
From MaRDI portal
Abstract: We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems, where a convex objective function is given by the sum of a smooth and a possibly non-smooth component. We consider the convergence of the iterates and derive non-asymptotic bounds in expectation in the strongly convex case, as well as almost sure convergence results under weaker assumptions. Our approach allows us to avoid averaging and to weaken the boundedness assumptions that are often considered in theoretical studies and might not be satisfied in practice.
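The iteration analyzed in the abstract, a stochastic gradient step on the smooth component followed by the proximal operator of the non-smooth component, can be sketched for ℓ1-regularized least squares. This is a minimal illustration with assumed problem data, sampling scheme, and step sizes, not the paper's actual algorithmic setup:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the non-smooth component here)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad(A, b, lam, n_iter=20000, seed=0):
    """Stochastic proximal gradient for min_x (1/2n)||Ax - b||^2 + lam*||x||_1.

    Each step uses the gradient of a single uniformly sampled term and a
    diminishing step size; the last iterate is returned without averaging.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(1, n_iter + 1):
        i = rng.integers(n)
        grad_i = (A[i] @ x - b[i]) * A[i]   # gradient of the sampled smooth term
        gamma = 1.0 / (5.0 + 0.5 * k)       # illustrative O(1/k) step-size choice
        x = soft_threshold(x - gamma * grad_i, gamma * lam)
    return x

# Tiny synthetic sparse regression problem (hypothetical data).
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
b = A @ x_true
x_hat = stochastic_prox_grad(A, b, lam=0.01)
```

Because the prox step is applied at every iteration, the iterates stay close to the sparsity pattern induced by the ℓ1 term, which is one practical motivation for studying the non-averaged iterates directly.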
Recommendations
- Ergodic convergence of a stochastic proximal point algorithm
- scientific article; zbMATH DE number 1322672
- Convergences of regularized algorithms and stochastic gradient methods with random projections
- Convergence of Proximal-Like Algorithms
- Convergence analysis of gradient descent stochastic algorithms
- scientific article; zbMATH DE number 1341059
- New Convergence Aspects of Stochastic Gradient Algorithms
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
Cites work
- scientific article; zbMATH DE number 4015993 (title unavailable)
- scientific article; zbMATH DE number 3790208 (title unavailable)
- scientific article; zbMATH DE number 48727 (title unavailable)
- scientific article; zbMATH DE number 1043533 (title unavailable)
- scientific article; zbMATH DE number 3894826 (title unavailable)
- A Convergent Incremental Gradient Method with a Constant Step Size
- A Stochastic Approximation Method
- A first-order stochastic primal-dual algorithm with correction step
- A sparsity preserving stochastic gradient method for sparse regression
- Accelerated and inexact forward-backward algorithms
- Acceleration of Stochastic Approximation by Averaging
- An optimal method for stochastic composite optimization
- Beyond the regret minimization barrier: optimal algorithms for stochastic strongly-convex optimization
- Convex analysis and monotone operator theory in Hilbert spaces
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Dual averaging methods for regularized stochastic learning and online optimization
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- Efficient online and batch learning using forward backward splitting
- Elastic-net regularization in learning theory
- Gradient Convergence in Gradient Methods with Errors
- Minimizing finite sums with the stochastic average gradient
- Modified Fejér sequences and applications
- Nonparametric sparsity and regularization
- On perturbed proximal gradient algorithms
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- Optimization methods for large-scale machine learning
- Pegasos: primal estimated sub-gradient solver for SVM
- Prediction, Learning, and Games
- Proximal methods for the latent group lasso penalty
- Proximal splitting methods in signal processing
- Regularization and Variable Selection Via the Elastic Net
- Robust Stochastic Approximation Approach to Stochastic Programming
- Signal Recovery by Proximal Forward-Backward Splitting
- Stochastic Estimation of the Maximum of a Regression Function
- Stochastic approximations and perturbations in forward-backward splitting for monotone operators
- Stochastic dual coordinate ascent methods for regularized loss minimization
- Stochastic forward-backward splitting for monotone inclusions
- Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping
- Structured sparsity through convex optimization
- Understanding machine learning. From theory to algorithms
Cited in (51)
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
- General convergence analysis of stochastic first-order methods for composite optimization
- New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
- Mini-batch stochastic subgradient for functional constrained optimization
- Binary quantized network training with sharpness-aware minimization
- Stochastic forward-backward splitting for monotone inclusions
- Iterative regularization in classification via hinge loss diagonal descent
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Analysis of Online Composite Mirror Descent Algorithm
- Proximal Gradient Methods for Machine Learning and Imaging
- Convergence properties of stochastic optimization procedures
- scientific article; zbMATH DE number 7733450 (title unavailable)
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- Convergence analysis of stochastic higher-order majorization–minimization algorithms
- Fluorescence image deconvolution microscopy via generative adversarial learning (FluoGAN)
- On perturbed proximal gradient algorithms
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- Differentially private regularized stochastic convex optimization with heavy-tailed data
- Almost sure convergence rates of stochastic proximal gradient descent algorithm
- scientific article; zbMATH DE number 7370578 (title unavailable)
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Scalable estimation strategies based on stochastic approximations: classical results and new insights
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- The stochastic auxiliary problem principle in Banach spaces: measurability and convergence
- On the convergence of stochastic primal-dual hybrid gradient
- A stochastic variance reduced primal dual fixed point method for linearly constrained separable optimization
- Ergodic convergence of a stochastic proximal point algorithm
- Sharper Bounds for Proximal Gradient Algorithms with Errors
- Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
- Stochastic block projection algorithms with extrapolation for convex feasibility problems
- Maximum likelihood estimation of regularization parameters in high-dimensional inverse problems: an empirical Bayesian approach. II: Theoretical analysis
- Sequential sample average majorization-minimization
- Privacy-preserving federated averaging on heterogeneous data
- A new random reshuffling method for nonsmooth nonconvex finite-sum optimization
- Stochastic proximal subgradient descent oscillates in the vicinity of its accumulation set
- Convergence analysis of the stochastic reflected forward-backward splitting algorithm
- A framework of convergence analysis of mini-batch stochastic projected gradient methods
- Universal regular conditional distributions via probabilistic transformers
- Federated primal dual fixed point algorithm
- The stochastic proximal distance algorithm
- New Convergence Aspects of Stochastic Gradient Algorithms
- A new regularized stochastic approximation framework for stochastic inverse problems
- Stochastic proximal-gradient algorithms for penalized mixed models
- Stochastic proximal splitting algorithm for composite minimization
- Sparse online regression algorithm with insensitive loss functions
- Privacy-preserving federated learning on lattice quantization
- A linearly convergent stochastic recursive gradient method for convex optimization
- High-performance statistical computing in the computing environments of the 2020s
- SABRINA: a stochastic subspace majorization-minimization algorithm
- Stochastic proximal gradient method for \(\ell_1\) regularized optimization over a sphere
This page was built for publication: Convergence of stochastic proximal gradient algorithm