On the information-adaptive variants of the ADMM: an iteration complexity perspective
From MaRDI portal
Publication:1668725
DOI: 10.1007/s10915-017-0621-6
zbMath: 1394.90447
OpenAlex: W2772692493
MaRDI QID: Q1668725
Bo Jiang, Xiang Gao, Shu-Zhong Zhang
Publication date: 29 August 2018
Published in: Journal of Scientific Computing
Full work available at URL: https://doi.org/10.1007/s10915-017-0621-6
Keywords: stochastic approximation; direct method; first-order method; iteration complexity; alternating direction method of multipliers (ADMM)
Mathematics Subject Classification: Analysis of algorithms and problem complexity (68Q25); Convex programming (90C25); Stochastic programming (90C15); Stochastic approximation (62L20)
Related Items
- On inexact stochastic splitting methods for a class of nonconvex composite optimization problems with relative error
- A survey on some recent developments of alternating direction method of multipliers
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- Zeroth-order single-loop algorithms for nonconvex-linear minimax problems
- Complexity analysis of a stochastic variant of generalized alternating direction method of multipliers
- First-order algorithms for convex optimization with nonseparable objective and coupled constraints
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- Smoothed functional-based gradient algorithms for off-policy reinforcement learning: a non-asymptotic viewpoint
- An extragradient-based alternating direction method for convex minimization
- Randomized primal-dual proximal block coordinate updates
- Distributed Subgradient-Free Stochastic Optimization Algorithm for Nonsmooth Convex Functions over Time-Varying Networks
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- An optimal method for stochastic composite optimization
- Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
- An extragradient-based alternating direction method for convex minimization
- The solution path of the generalized lasso
- On the sublinear convergence rate of multi-block ADMM
- New method of stochastic approximation type
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- A new inexact alternating directions method for monotone variational inequalities
- A simple algorithm for a class of nonsmooth convex-concave saddle-point problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers
- Random gradient-free minimization of convex functions
- On the global and linear convergence of the generalized alternating direction method of multipliers
- On the proximal Jacobian decomposition of ALM for multiple-block separable convex minimization problems and its relationship to ADMM
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Inexact Alternating Direction Methods for Image Recovery
- Online Learning and Online Convex Optimization
- On the Numerical Solution of Heat Conduction Problems in Two and Three Space Variables
- On Full Jacobian Decomposition of the Augmented Lagrangian Method for Separable Convex Programming
- First-Order Methods for Sparse Covariance Selection
- Robust Stochastic Approximation Approach to Stochastic Programming
- Stochastic quasigradient methods and their application to system optimization
- A method of aggregate stochastic subgradients with on-line stepsize rules for convex stochastic programming problems
- Acceleration of Stochastic Approximation by Averaging
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Sparsity and Smoothness Via the Fused Lasso
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Rate of Convergence Analysis of Decomposition Methods Based on the Proximal Method of Multipliers for Convex Minimization
- Iteration-Complexity of Block-Decomposition Algorithms and the Alternating Direction Method of Multipliers
- Convergence Rate Analysis of Several Splitting Schemes
- Convergence Rate Analysis for the Alternating Direction Method of Multipliers with a Substitution Procedure for Separable Convex Programming
- Local Linear Convergence of the Alternating Direction Method of Multipliers for Quadratic Programs
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Local Linear Convergence of the Alternating Direction Method of Multipliers on Quadratic or Linear Programs
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- On the Global Linear Convergence of the ADMM with MultiBlock Variables
- Asymptotic Distribution of Stochastic Approximation Procedures
- A Stochastic Approximation Method
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization