Convergence of stochastic proximal gradient algorithm
Publication: 2019902
DOI: 10.1007/S00245-019-09617-7 · zbMATH Open: 1465.90101 · arXiv: 1403.5074 · OpenAlex: W2980398138 · Wikidata: Q127020378 · Scholia: Q127020378 · MaRDI QID: Q2019902 · FDO: Q2019902
Silvia Villa, Băng Công Vũ, Lorenzo Rosasco
Publication date: 22 April 2021
Published in: Applied Mathematics and Optimization
Abstract: We prove novel convergence results for a stochastic proximal gradient algorithm suitable for solving a large class of convex optimization problems, in which the objective function is the sum of a smooth convex component and a possibly nonsmooth convex component. We study convergence of the iterates and derive nonasymptotic bounds in expectation in the strongly convex case, as well as almost sure convergence results under weaker assumptions. Our approach allows us to avoid averaging and to weaken the boundedness assumptions that are often imposed in theoretical studies and might not be satisfied in practice.
Full work available at URL: https://arxiv.org/abs/1403.5074
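The algorithm analyzed in the paper is the stochastic forward-backward step x_{k+1} = prox_{γ_k g}(x_k − γ_k ĝ_k), where ĝ_k is an unbiased estimate of the gradient of the smooth component and γ_k is a diminishing step size. Below is a minimal sketch on a lasso-type instance (smooth least-squares loss plus an ℓ1 penalty). The concrete objective, the step-size constant c, and all function names here are illustrative assumptions, not taken from the paper; note that the sketch returns the last iterate rather than an average, consistent with the paper's focus on convergence of the iterates themselves.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def stochastic_proximal_gradient(A, b, lam, n_iter=10000, c=1.0, seed=0):
    """Minimize (1/2n) ||A x - b||^2 + lam * ||x||_1 with single-sample
    stochastic gradients and diminishing steps gamma_k = c / (k + 1).
    Illustrative sketch only; step-size schedule and problem are assumptions."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(n_iter):
        i = rng.integers(n)                  # sample one data point uniformly
        grad = (A[i] @ x - b[i]) * A[i]      # unbiased estimate of the smooth gradient
        gamma = c / (k + 1)                  # diminishing step size
        x = soft_threshold(x - gamma * grad, gamma * lam)  # forward-backward step
    return x  # last iterate, no averaging

# Usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = stochastic_proximal_gradient(A, b, lam=0.1)
```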
Recommendations
- Ergodic convergence of a stochastic proximal point algorithm
- scientific article; zbMATH DE number 1322672
- Convergences of regularized algorithms and stochastic gradient methods with random projections
- Convergence of Proximal-Like Algorithms
- Convergence analysis of gradient descent stochastic algorithms
- scientific article; zbMATH DE number 1341059
- New Convergence Aspects of Stochastic Gradient Algorithms
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
Cites Work
- Pegasos: primal estimated sub-gradient solver for SVM
- Regularization and Variable Selection Via the Elastic Net
- Prediction, Learning, and Games
- Convex analysis and monotone operator theory in Hilbert spaces
- Nonparametric sparsity and regularization
- Title not available
- Acceleration of Stochastic Approximation by Averaging
- A Stochastic Approximation Method
- Structured sparsity through convex optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- Title not available
- Understanding Machine Learning
- Proximal Splitting Methods in Signal Processing
- Signal Recovery by Proximal Forward-Backward Splitting
- Accelerated and inexact forward-backward algorithms
- Dual averaging methods for regularized stochastic learning and online optimization
- Title not available
- An optimal method for stochastic composite optimization
- Gradient Convergence in Gradient Methods with Errors
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- A Convergent Incremental Gradient Method with a Constant Step Size
- Stochastic forward-backward splitting for monotone inclusions
- Title not available
- Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations with Random Sweeping
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Stochastic approximations and perturbations in forward-backward splitting for monotone operators
- Efficient online and batch learning using forward backward splitting
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- An optimal algorithm for stochastic strongly-convex optimization
- A sparsity preserving stochastic gradient method for sparse regression
- On perturbed proximal gradient algorithms
- Stochastic Estimation of the Maximum of a Regression Function
- Elastic-net regularization in learning theory
- A First-Order Stochastic Primal-Dual Algorithm with Correction Step
- Proximal methods for the latent group lasso penalty
- Minimizing finite sums with the stochastic average gradient
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- Optimization Methods for Large-Scale Machine Learning
- Modified Fejér sequences and applications
Cited In (40)
- General convergence analysis of stochastic first-order methods for composite optimization
- The Stochastic Auxiliary Problem Principle in Banach Spaces: Measurability and Convergence
- New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
- Mini-batch stochastic subgradient for functional constrained optimization
- Binary quantized network training with sharpness-aware minimization
- Stochastic forward-backward splitting for monotone inclusions
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Analysis of Online Composite Mirror Descent Algorithm
- Proximal Gradient Methods for Machine Learning and Imaging
- Title not available
- Convergence properties of stochastic optimization procedures
- A Stochastic Variance Reduced Primal Dual Fixed Point Method for Linearly Constrained Separable Optimization
- Convergence analysis of stochastic higher-order majorization–minimization algorithms
- Fluorescence image deconvolution microscopy via generative adversarial learning (FluoGAN)
- Maximum Likelihood Estimation of Regularization Parameters in High-Dimensional Inverse Problems: An Empirical Bayesian Approach. Part II: Theoretical Analysis
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
- Almost sure convergence rates of stochastic proximal gradient descent algorithm
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- Title not available
- Scalable estimation strategies based on stochastic approximations: classical results and new insights
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- Sharper Bounds for Proximal Gradient Algorithms with Errors
- Ergodic convergence of a stochastic proximal point algorithm
- Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
- Title not available
- Stochastic block projection algorithms with extrapolation for convex feasibility problems
- Stochastic proximal subgradient descent oscillates in the vicinity of its accumulation set
- Universal regular conditional distributions via probabilistic transformers
- Convergence analysis of the stochastic reflected forward-backward splitting algorithm
- Federated primal dual fixed point algorithm
- New Convergence Aspects of Stochastic Gradient Algorithms
- A new regularized stochastic approximation framework for stochastic inverse problems
- Stochastic proximal-gradient algorithms for penalized mixed models
- Sparse online regression algorithm with insensitive loss functions
- Privacy-preserving federated learning on lattice quantization
- Stochastic proximal splitting algorithm for composite minimization
- A linearly convergent stochastic recursive gradient method for convex optimization
- High-performance statistical computing in the computing environments of the 2020s
- SABRINA: a stochastic subspace majorization-minimization algorithm
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes