An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems

Publication:3465236


DOI: 10.1137/14096757X
zbMath: 1329.90179
MaRDI QID: Q3465236

Renato D. C. Monteiro, Yunlong He

Publication date: 21 January 2016

Published in: SIAM Journal on Optimization


65K05: Numerical mathematical programming methods

90C25: Convex programming

90C60: Abstract computational complexity for mathematical programming problems

90C30: Nonlinear programming

47J20: Variational and other types of inequalities involving nonlinear operators (general)

65K10: Numerical optimization and variational techniques

47H05: Monotone operators and generalizations


Related Items

A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates
Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming
A Primal-Dual Algorithm with Line Search for General Convex-Concave Saddle Point Problems
Projection-free accelerated method for convex optimization
New Primal-Dual Algorithms for a Class of Nonsmooth and Nonlinear Convex-Concave Minimax Problems
Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
On the convergence rate of the scaled proximal decomposition on the graph of a maximal monotone operator (SPDG) algorithm
An Accelerated Inexact Proximal Point Method for Solving Nonconvex-Concave Min-Max Problems
On the iteration-complexity of a non-Euclidean hybrid proximal extragradient framework and of a proximal ADMM
Complexity of a Quadratic Penalty Accelerated Inexact Proximal Point Method for Solving Linearly Constrained Nonconvex Composite Programs
Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
Iteration Complexity of an Inner Accelerated Inexact Proximal Augmented Lagrangian Method Based on the Classical Lagrangian Function
An adaptive superfast inexact proximal augmented Lagrangian method for smooth nonconvex composite optimization problems
A proximal neurodynamic model for solving inverse mixed variational inequalities
A stochastic variance-reduced accelerated primal-dual method for finite-sum saddle-point problems
A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems
No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
Randomized Lagrangian stochastic approximation for large-scale constrained stochastic Nash games
Complexity of the relaxed Peaceman-Rachford splitting method for the sum of two maximal strongly monotone operators
Pointwise and ergodic convergence rates of a variable metric proximal alternating direction method of multipliers
An inexact Spingarn's partial inverse method with applications to operator splitting and composite optimization
Point process estimation with Mirror Prox algorithms
Acceleration of primal-dual methods by preconditioning and simple subproblem procedures
A FISTA-type accelerated gradient algorithm for solving smooth nonconvex composite optimization problems
Accelerated gradient sliding for structured convex optimization
Accelerated inexact composite gradient methods for nonconvex spectral optimization problems
An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
On inexact relative-error hybrid proximal extragradient, forward-backward and Tseng's modified forward-backward methods with inertial effects
Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
Primal-dual proximal splitting and generalized conjugation in non-smooth non-convex optimization
Iteration complexity of an inexact Douglas-Rachford method and of a Douglas-Rachford-Tseng's F-B four-operator splitting method for solving monotone inclusions
Fast bundle-level methods for unconstrained and ball-constrained convex optimization
Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach
Improved Pointwise Iteration-Complexity of a Regularized ADMM and of a Regularized Non-Euclidean HPE Framework



Cites Work