General convergence analysis of stochastic first-order methods for composite optimization
Abstract: In this paper we consider stochastic composite convex optimization problems whose objective function satisfies a stochastic bounded gradient condition, with or without a quadratic functional growth property. These models include the most well-known classes of objective functions analyzed in the literature: non-smooth Lipschitz functions and the composition of a (potentially) non-smooth function with a smooth function, with or without strong convexity. Based on the flexibility offered by our optimization model, we consider several variants of stochastic first-order methods, such as the stochastic proximal gradient and the stochastic proximal point algorithms. Usually, the convergence theory for these methods has been derived for simple stochastic optimization models satisfying restrictive assumptions; the rates are in general sublinear and hold only for specific decreasing stepsizes. Hence, we analyze the convergence rates of stochastic first-order methods with constant or variable stepsize under general assumptions covering a large class of objective functions. For a constant stepsize we show that these methods can achieve a linear convergence rate up to a constant proportional to the stepsize, and under a stronger stochastic bounded gradient condition, even pure linear convergence. Moreover, when a variable stepsize is chosen, we derive sublinear convergence rates for these stochastic first-order methods. Finally, the stochastic gradient mapping and the Moreau smoothing mapping introduced in the present paper lead to simple and intuitive proofs.
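The abstract describes constant-stepsize stochastic proximal gradient iterations for composite problems. Below is a minimal, hypothetical sketch of such an iteration for an l1-regularized least-squares instance; the loss, parameter values, and function names are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch (assumed test problem, not the paper's experiments):
# constant-stepsize stochastic proximal gradient for
#   min_x (1/2n) ||Ax - b||^2 + lam * ||x||_1
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_gradient(A, b, lam=0.1, stepsize=0.01, iters=5000, seed=0):
    """Run a constant-stepsize stochastic proximal gradient iteration."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)                       # sample one data point uniformly
        g = (A[i] @ x - b[i]) * A[i]              # stochastic gradient of the smooth part
        x = soft_threshold(x - stepsize * g, stepsize * lam)  # prox step on the nonsmooth part
    return x

if __name__ == "__main__":
    # Tiny synthetic usage example.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 20))
    x_true = np.zeros(20)
    x_true[:3] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = stochastic_prox_gradient(A, b)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```

With a constant stepsize, the iterates settle into a neighborhood of the solution whose size is proportional to the stepsize, which is consistent with the "linear convergence up to a constant" behavior summarized in the abstract.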
Recommendations
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Unified analysis of stochastic gradient methods for composite convex and smooth optimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Convergence analysis of a global optimization algorithm using stochastic differential equations
- Convergence properties of stochastic optimization procedures
- Convergence of simultaneous perturbation stochastic approximation for nondifferentiable optimization
- Iterative decomposition of stochastic optimization problems with first-order partial differential equations
- Convergence of an Inexact Algorithm for Composite Nonsmooth Optimization
Cites work
- Scientific article; zbMATH DE number 4164577
- A proximal stochastic gradient method with progressive variance reduction
- An optimal method for stochastic composite optimization
- Computational complexity of inexact gradient augmented Lagrangian methods: application to constrained MPC
- Convergence of stochastic proximal gradient algorithm
- Convex analysis and monotone operator theory in Hilbert spaces
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Efficient online and batch learning using forward backward splitting
- Energy-based sensor network source localization via projection onto convex sets
- First-order methods of smooth convex optimization with inexact oracle
- Introductory lectures on convex optimization. A basic course.
- Linear convergence of first order methods for non-strongly convex optimization
- Minimization of unsmooth functionals
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- On perturbed proximal gradient algorithms
- Optimization methods for large-scale machine learning
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Random minibatch subgradient algorithms for convex problems with functional constraints
- Randomized projection methods for convex feasibility: conditioning and convergence rates
- Robust Stochastic Approximation Approach to Stochastic Programming
- The solution path of the generalized lasso
- Weak Sharp Minima in Mathematical Programming
Cited in (11)
- Mini-batch stochastic subgradient for functional constrained optimization
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Unified analysis of stochastic gradient methods for composite convex and smooth optimization
- Convergence analysis of stochastic higher-order majorization–minimization algorithms
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Stochastic first-order methods with random constraint projection
- On the linear convergence of the stochastic gradient method with constant step-size
- Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
- A stochastic moving ball approximation method for smooth convex constrained minimization
- The value function approach to convergence analysis in composite optimization
- Iterative decomposition of stochastic optimization problems with first-order partial differential equations