Primal-Dual Stochastic Gradient Method for Convex Programs with Many Functional Constraints
From MaRDI portal
Publication:5114401
Abstract: The stochastic gradient method (SGM) has been widely applied to solve optimization problems whose objective is stochastic or an average of many functions. Most existing works on SGMs assume that the underlying problem is unconstrained or has an easy-to-project constraint set. In this paper, we consider problems that have a stochastic objective and also many functional constraints. For such problems, it can be extremely expensive to project a point onto the feasible set, or even to compute the subgradients and/or function values of all constraint functions. To find solutions of these problems, we propose a novel (adaptive) SGM based on the classical augmented Lagrangian function. In every iteration, it queries a stochastic subgradient of the objective, and a subgradient and the function value of one randomly sampled constraint function. Hence, the per-iteration complexity is low. We establish its convergence rate for convex problems and also for problems with a strongly convex objective. It achieves the optimal convergence rate in the convex case and a nearly optimal rate in the strongly convex case. Numerical experiments on a sample approximation problem of robust portfolio selection and on quadratically constrained quadratic programming are conducted to demonstrate its efficiency.
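The per-iteration scheme described in the abstract (one stochastic objective subgradient plus one randomly sampled constraint per step) can be illustrated with a minimal sketch. This is not the paper's exact method or step sizes, only a plausible augmented-Lagrangian-style primal-dual loop on a hypothetical toy instance: minimize \(\|x - c\|^2\) subject to a few linear constraints \(a_j^\top x \le b_j\), with noise injected to mimic a stochastic subgradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: min ||x - c||^2  s.t.  A x <= b
c = np.array([2.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.5, 1.5])
m = len(b)           # number of functional constraints
rho = 1.0            # augmented-Lagrangian penalty parameter (assumed)

x = np.zeros(2)      # primal iterate
z = np.zeros(m)      # dual multipliers, one per constraint
x_avg = np.zeros(2)  # ergodic average (rates are usually stated for averages)

T = 20000
for k in range(1, T + 1):
    alpha = 0.5 / np.sqrt(k)                 # diminishing step size (assumed)
    # stochastic subgradient of the objective: true gradient plus noise
    grad_f = 2.0 * (x - c) + 0.01 * rng.standard_normal(2)
    # sample ONE constraint uniformly; query its value and subgradient only
    j = rng.integers(m)
    g_j = A[j] @ x - b[j]
    # augmented-Lagrangian primal direction; factor m keeps the sampled
    # constraint term an unbiased estimate of the full sum over constraints
    u = max(z[j] + rho * g_j, 0.0)
    x = x - alpha * (grad_f + m * u * A[j])
    # dual ascent step on the sampled multiplier, projected to be nonnegative
    z[j] = max(z[j] + alpha * rho * g_j, 0.0)
    x_avg += (x - x_avg) / k                 # running average of iterates

print(x_avg)  # ≈ [1, 1], the projection of c onto the feasible set
```

Each iteration touches only one constraint row, so the per-iteration cost stays O(1) in the number of constraints, which is the point the abstract emphasizes.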
Recommendations
- A stochastic primal-dual method for a class of nonconvex constrained optimization
- Primal-dual mirror descent method for constraint stochastic optimization problems
- A primal-dual decomposition algorithm for multistage stochastic convex programming
- Adaptive primal-dual stochastic gradient method for expectation-constrained convex stochastic programs
- On the convergence of stochastic primal-dual hybrid gradient
- Primal-dual algorithms for optimization with stochastic dominance
- A fully stochastic primal-dual algorithm
- Stochastic primal dual fixed point method for composite optimization
- Primal-dual stochastic distributed algorithm for constrained convex optimization
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems
Cites work
- A Sample Approximation Approach for Optimization with Probabilistic Constraints
- A dual approach to solving nonlinear programming problems by unconstrained optimization
- A level-set method for convex optimization with a feasible solution path
- A randomized mirror-prox method for solving structured large-scale matrix saddle-point problems
- A sampling-and-discarding approach to chance-constrained optimization: feasibility and optimality
- Algorithms for stochastic optimization with function or expectation constraints
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming
- Lectures on Stochastic Programming
- Neyman-Pearson classification, convexity and stochastic constraints
- Nonlinear Programming
- Optimal primal-dual methods for a class of saddle point problems
- Projection on the intersection of convex sets
- Proximal-proximal-gradient method
- Robust Stochastic Approximation Approach to Stochastic Programming
- Scenario approximations of chance constraints
- Set intersection problems: supporting hyperplanes and quadratic programming
- Solving variational inequalities with stochastic mirror-prox algorithm
- Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
- Stochastic first-order methods with random constraint projection
- Subgradient methods for saddle-point problems
- The multiplier method of Hestenes and Powell applied to convex programming
- Uncertain convex programs: randomized solutions and confidence levels
Cited in (21)
- Primal dual methods for Wasserstein gradient flows
- Primal-dual mirror descent method for constraint stochastic optimization problems
- New convergence analysis of a primal-dual algorithm with large stepsizes
- Running Primal-Dual Gradient Method for Time-Varying Nonconvex Problems
- Mini-batch stochastic subgradient for functional constrained optimization
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- Stochastic inexact augmented Lagrangian method for nonconvex expectation constrained optimization
- A stochastic approximation method for convex programming with many semidefinite constraints
- First-order methods for problems with \(O(1)\) functional constraints can have almost the same convergence rate as for unconstrained problems
- A stochastic primal-dual method for a class of nonconvex constrained optimization
- On the convergence of stochastic primal-dual hybrid gradient
- An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
- Randomized Lagrangian stochastic approximation for large-scale constrained stochastic Nash games
- Stochastic nested primal-dual method for nonconvex constrained composition optimization
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems
- Spectral projected gradient method for stochastic optimization
- A stochastic moving ball approximation method for smooth convex constrained minimization
- An empirical quantile estimation approach for chance-constrained nonlinear optimization problems
- Adaptive primal-dual stochastic gradient method for expectation-constrained convex stochastic programs