Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
From MaRDI portal
Publication: Q4652003
DOI: 10.1137/S1052623403425629
zbMATH Open: 1106.90059
DBLP: journals/siamjo/Nemirovski04
Wikidata: Q57392926 (Scholia: Q57392926)
MaRDI QID: Q4652003 (FDO: Q4652003)
Publication date: 23 February 2005
Published in: SIAM Journal on Optimization
Recommendations
- On the \(O(1/t)\) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators
- On the convergence rate of a class of proximal-based decomposition methods for monotone variational inequalities
- Pseudomonotone variational inequalities: Convergence of proximal methods
- Proximal-like contraction methods for monotone variational inequalities in a unified framework. II: General methods and numerical experiments
- The proximal point method for nonmonotone variational inequalities
- Convergence of the proximal point algorithm to approximate solutions of variational inequalities
Cited In (only the first 100 items are shown)
- The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods
- Convergence of the method of extrapolation from the past for variational inequalities in uniformly convex Banach spaces
- Learning in nonatomic games. I: Finite action spaces and population games
- An Accelerated Inexact Proximal Point Method for Solving Nonconvex-Concave Min-Max Problems
- Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants
- Inexact model: a framework for optimization and variational inequalities
- Accelerated Bregman Primal-Dual Methods Applied to Optimal Transport and Wasserstein Barycenter Problems
- An adaptive two-stage proximal algorithm for equilibrium problems in Hadamard spaces
- Communication-efficient algorithms for decentralized and stochastic optimization
- Self-concordant inclusions: a unified framework for path-following generalized Newton-type algorithms
- Image restoration based on the minimized surface regularization
- A double extrapolation primal-dual algorithm for saddle point problems
- Adaptive two-stage Bregman method for variational inequalities
- Adaptive extraproximal algorithm for the equilibrium problem in Hadamard spaces
- Nonsymmetric proximal point algorithm with moving proximal centers for variational inequalities: convergence analysis
- Local saddle points for unconstrained polynomial optimization
- Extragradient and extrapolation methods with generalized Bregman distances for saddle point problems
- An Inverse-Adjusted Best Response Algorithm for Nash Equilibria
- Saddle points of rational functions
- A cyclic block coordinate descent method with generalized gradient projections
- Dynamic stochastic approximation for multi-stage stochastic optimization
- Solving Large-Scale Optimization Problems with a Convergence Rate Independent of Grid Size
- On the iteration complexity of some projection methods for monotone linear variational inequalities
- An optimal randomized incremental gradient method
- Incremental Constraint Projection Methods for Monotone Stochastic Variational Inequalities
- Scalable Semidefinite Programming
- Cubic regularized Newton method for the saddle point models: a global and local convergence analysis
- An \(O(s^r)\)-resolution ODE framework for understanding discrete-time algorithms and applications to the linear convergence of minimax problems
- On lower iteration complexity bounds for the convex concave saddle point problems
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- A golden ratio primal-dual algorithm for structured convex optimization
- A simplified view of first order methods for optimization
- On the convergence rate of a class of proximal-based decomposition methods for monotone variational inequalities
- A stochastic primal-dual method for a class of nonconvex constrained optimization
- Golden ratio algorithms for variational inequalities
- On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems
- On iteration complexity of a first-order primal-dual method for nonlinear convex cone programming
- Customized alternating direction methods of multipliers for generalized multi-facility Weber problem
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- Bregman extragradient method with monotone rule of step adjustment
- A Novel Algorithm with Self-adaptive Technique for Solving Variational Inequalities in Banach Spaces
- Infinite-dimensional gradient-based descent for alpha-divergence minimisation
- Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints
- Bounded perturbation resilience of extragradient-type methods and their applications
- An efficient primal dual prox method for non-smooth optimization
- A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
- Accelerated schemes for a class of variational inequalities
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- A Level-Set Method for Convex Optimization with a Feasible Solution Path
- On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators
- Iteration complexity of generalized complementarity problems
- Weak and strong convergence Bregman extragradient schemes for solving pseudo-monotone and non-Lipschitz variational inequalities
- Online First-Order Framework for Robust Convex Optimization
- Bregman subgradient extragradient method with monotone self-adjustment stepsize for solving pseudo-monotone variational inequalities and fixed point problems
- Accelerated methods for saddle-point problem
- An alternating direction method of multipliers with a worst-case $O(1/n^2)$ convergence rate
- Convergence of two-stage method with Bregman divergence for solving variational inequalities
- Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates
- A Primal-Dual Algorithm with Line Search for General Convex-Concave Saddle Point Problems
- Convergence of the operator extrapolation method for variational inequalities in Banach spaces
- On stochastic mirror-prox algorithms for stochastic Cartesian variational inequalities: randomized block coordinate and optimal averaging schemes
- On the efficiency of a randomized mirror descent algorithm in online optimization problems
- Unifying mirror descent and dual averaging
- Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
- Accelerated gradient sliding for structured convex optimization
- Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
- The saddle point problem of polynomials
- PPA-like contraction methods for convex optimization: a framework using variational inequality approach
- A semi-definite programming approach for robust tracking
- Solving variational inequalities with monotone operators on domains given by linear minimization oracles
- An implementable proximal point algorithmic framework for nuclear norm minimization
- Inexact alternating-direction-based contraction methods for separable linearly constrained convex optimization
- Korpelevich's method for variational inequality problems in Banach spaces
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- An optimal method for stochastic composite optimization
- On verifiable sufficient conditions for sparse signal recovery via \(\ell_{1}\) minimization
- Dual subgradient algorithms for large-scale nonsmooth learning problems
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Accelerated linearized Bregman method
- First-order methods for convex optimization
- Self-concordant barriers for convex approximations of structured convex sets
- Subgradient methods for saddle-point problems
- On non-ergodic convergence rate of the operator splitting method for a class of variational inequalities
- Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach
- Dual extrapolation and its applications to solving variational inequalities and related problems
- Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- On the Convergence of Mirror Descent beyond Stochastic Convex Programming
- Regularized HPE-Type Methods for Solving Monotone Inclusions with Improved Pointwise Iteration-Complexity Bounds
- The generalized proximal point algorithm with step size 2 is not necessarily convergent
- On the linear convergence of the general first order primal-dual algorithm
- Inexact first-order primal-dual algorithms
- A primal-dual prediction-correction algorithm for saddle point optimization
- Sublinear time algorithms for approximate semidefinite programming
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming
- On the \(O(1/t)\) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators
- Iteration-complexity of first-order penalty methods for convex programming