Primal-Dual Stochastic Gradient Method for Convex Programs with Many Functional Constraints
Publication: 5114401
DOI: 10.1137/18M1229869 · zbMath: 1445.90070 · arXiv: 1802.02724 · OpenAlex: W3035804846 · MaRDI QID: Q5114401
Publication date: 22 June 2020
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1802.02724
adaptive learning · augmented Lagrangian method · iteration complexity · functional constraint · stochastic gradient method · ALM · SGM
Analysis of algorithms (68W40) · Convex programming (90C25) · Large-scale problems in mathematical programming (90C06) · Nonlinear programming (90C30) · Stochastic programming (90C15)
Related Items (10)
A stochastic approximation method for convex programming with many semidefinite constraints ⋮ Adaptive primal-dual stochastic gradient method for expectation-constrained convex stochastic programs ⋮ A stochastic primal-dual method for a class of nonconvex constrained optimization ⋮ On the Convergence of Stochastic Primal-Dual Hybrid Gradient ⋮ First-Order Methods for Problems with $O$(1) Functional Constraints Can Have Almost the Same Convergence Rate as for Unconstrained Problems ⋮ An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints ⋮ Stochastic inexact augmented Lagrangian method for nonconvex expectation constrained optimization ⋮ Randomized Lagrangian stochastic approximation for large-scale constrained stochastic Nash games ⋮ Primal-dual incremental gradient method for nonsmooth and convex optimization problems ⋮ New convergence analysis of a primal-dual algorithm with large stepsizes
Cites Work
- Projection on the intersection of convex sets
- Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
- A sampling-and-discarding approach to chance-constrained optimization: feasibility and optimality
- Subgradient methods for saddle-point problems
- Uncertain convex programs: randomized solutions and confidence levels
- Algorithms for stochastic optimization with function or expectation constraints
- Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming
- Set intersection problems: supporting hyperplanes and quadratic programming
- The multiplier method of Hestenes and Powell applied to convex programming
- Stochastic First-Order Methods with Random Constraint Projection
- A Randomized Mirror-Prox Method for Solving Structured Large-Scale Matrix Saddle-Point Problems
- A Sample Approximation Approach for Optimization with Probabilistic Constraints
- Lectures on Stochastic Programming
- Robust Stochastic Approximation Approach to Stochastic Programming
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- A Level-Set Method for Convex Optimization with a Feasible Solution Path
- A dual approach to solving nonlinear programming problems by unconstrained optimization
- Proximal-Proximal-Gradient Method
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- Neyman-Pearson classification, convexity and stochastic constraints
- Nonlinear Programming