Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
From MaRDI portal
DOI: 10.1137/S1052623403425629
zbMATH Open: 1106.90059
DBLP: journals/siamjo/Nemirovski04
Wikidata: Q57392926 (Scholia: Q57392926)
MaRDI QID: Q4652003
FDO: Q4652003
Authors: Arkadi Nemirovski
Publication date: 23 February 2005
Published in: SIAM Journal on Optimization
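The paper itself is not reproduced on this page, but the method named in the title is the Mirror-Prox scheme. As a hedged illustration only (the variable names, step size, and the bilinear test problem below are chosen for this sketch, not taken from the paper), its Euclidean special case on an unconstrained domain reduces to the extragradient iteration: an extrapolation step with the monotone operator \(F\), then a correction step using \(F\) at the extrapolated point, with the \(O(1/t)\) guarantee holding for the ergodic (averaged) iterate.

```python
# Hedged sketch: Euclidean, unconstrained special case of Mirror Prox,
# i.e. the extragradient method for a monotone variational inequality.
# All names and parameters here are illustrative assumptions.

def extragradient(F, z0, step, iters):
    """Run the extragradient method and return the ergodic average of the
    extrapolated points, which carries the O(1/t) rate guarantee."""
    z = list(z0)
    avg = [0.0] * len(z0)
    for _ in range(iters):
        g = F(z)
        w = [zi - step * gi for zi, gi in zip(z, g)]    # extrapolation step
        gw = F(w)
        z = [zi - step * gi for zi, gi in zip(z, gw)]   # correction step
        avg = [a + wi for a, wi in zip(avg, w)]         # accumulate ergodic sum
    return [a / iters for a in avg]

# Toy bilinear saddle point min_x max_y x*y, whose VI operator is
# F(x, y) = (y, -x): monotone with Lipschitz constant L = 1.
F = lambda z: [z[1], -z[0]]

# Step size below 1/L; the averaged iterate approaches the saddle point (0, 0).
sol = extragradient(F, [1.0, 1.0], step=0.5, iters=2000)
```

Note that plain gradient descent-ascent diverges on this bilinear example; the second operator evaluation at the extrapolated point is what yields convergence, which is the key design idea of the prox-method.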
Recommendations
- On the \(O(1/t)\) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators
- On the convergence rate of a class of proximal-based decomposition methods for monotone variational inequalities
- Pseudomonotone variational inequalities: Convergence of proximal methods
- scientific article; zbMATH DE number 2109090
- Proximal-like contraction methods for monotone variational inequalities in a unified framework. II: General methods and numerical experiments
- scientific article; zbMATH DE number 2084889
- The proximal point method for nonmonotone variational inequalities
- scientific article; zbMATH DE number 6179220
- scientific article; zbMATH DE number 2208618
- Convergence of the proximal point algorithm to approximate solutions of variational inequalities
Cited In (first 100 items shown)
- The saddle point problem of polynomials
- PPA-like contraction methods for convex optimization: a framework using variational inequality approach
- A semi-definite programming approach for robust tracking
- Solving variational inequalities with monotone operators on domains given by linear minimization oracles
- An implementable proximal point algorithmic framework for nuclear norm minimization
- Inexact alternating-direction-based contraction methods for separable linearly constrained convex optimization
- Korpelevich's method for variational inequality problems in Banach spaces
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- An optimal method for stochastic composite optimization
- On verifiable sufficient conditions for sparse signal recovery via \(\ell_{1}\) minimization
- Dual subgradient algorithms for large-scale nonsmooth learning problems
- Solving variational inequalities with stochastic mirror-prox algorithm
- Accelerated linearized Bregman method
- First-order methods for convex optimization
- Self-concordant barriers for convex approximations of structured convex sets
- Subgradient methods for saddle-point problems
- On non-ergodic convergence rate of the operator splitting method for a class of variational inequalities
- Dual extrapolation and its applications to solving variational inequalities and related problems
- A proximal strictly contractive Peaceman-Rachford splitting method for convex programming with applications to imaging
- Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- Regularized HPE-Type Methods for Solving Monotone Inclusions with Improved Pointwise Iteration-Complexity Bounds
- The generalized proximal point algorithm with step size 2 is not necessarily convergent
- On the linear convergence of the general first order primal-dual algorithm
- Inexact first-order primal-dual algorithms
- A primal-dual prediction-correction algorithm for saddle point optimization
- Sublinear time algorithms for approximate semidefinite programming
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming
- On the \(O(1/t)\) convergence rate of the projection and contraction methods for variational inequalities with Lipschitz continuous monotone operators
- Iteration-complexity of first-order penalty methods for convex programming
- Stochastic mirror descent dynamics and their convergence in monotone variational inequalities
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Efficient first-order methods for convex minimization: a constructive approach
- Level-set methods for convex optimization
- On the convergence of mirror descent beyond stochastic convex programming
- On the optimal linear convergence rate of a generalized proximal point algorithm
- On the convergence rate of Douglas-Rachford operator splitting method
- A hybrid proximal extragradient self-concordant primal barrier method for monotone variational inequalities
- Accelerating block-decomposition first-order methods for solving composite saddle-point and two-player Nash equilibrium problems
- An accelerated HPE-type algorithm for a class of composite convex-concave saddle-point problems
- Randomized first order algorithms with applications to \(\ell _{1}\)-minimization
- Discussion on: "Why is resorting to fate wise? A critical look at randomized algorithms in systems and control"
- Barrier subgradient method
- An alternating extragradient method with non Euclidean projections for saddle point problems
- Complexity of first-order inexact Lagrangian and penalty methods for conic convex programming
- On the resolution of misspecified convex optimization and monotone variational inequality problems
- A simple algorithm for a class of nonsmooth convex-concave saddle-point problems
- Recovery of high-dimensional sparse signals via \(\ell_1\)-minimization
- An \(\mathcal O(1/{k})\) convergence rate for the variable stepsize Bregman operator splitting algorithm
- Primal-dual subgradient methods for convex problems
- An introduction to continuous optimization for imaging
- Proximal extrapolated gradient methods for variational inequalities
- Mirror Prox algorithm for multi-term composite minimization and semi-separable problems
- A double smoothing technique for solving unconstrained nondifferentiable convex optimization problems
- Estimation of high-dimensional low-rank matrices
- Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
- Sparse non Gaussian component analysis by semidefinite programming
- Iterative methods for the elastography inverse problem of locating tumors
- Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization
- A majorized ADMM with indefinite proximal terms for linearly constrained convex composite optimization
- An improved first-order primal-dual algorithm with a new correction step
- Sparse learning for large-scale and high-dimensional data: a randomized convex-concave optimization approach
- A version of the mirror descent method to solve variational inequalities
- An extragradient-based alternating direction method for convex minimization
- Large-scale semidefinite programming via a saddle point mirror-prox algorithm
- Multicommodity network flows: A survey. II: Solution methods
- New version of mirror prox for variational inequalities with adaptation to inexactness
- A novel algorithm with self-adaptive technique for solving variational inequalities in Banach spaces
- Convergence of the method of extrapolation from the past for variational inequalities in uniformly convex Banach spaces
- Learning in nonatomic games. I: Finite action spaces and population games
- An Accelerated Inexact Proximal Point Method for Solving Nonconvex-Concave Min-Max Problems
- Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants
- Inexact model: a framework for optimization and variational inequalities
- Accelerated Bregman Primal-Dual Methods Applied to Optimal Transport and Wasserstein Barycenter Problems
- An adaptive two-stage proximal algorithm for equilibrium problems in Hadamard spaces
- Communication-efficient algorithms for decentralized and stochastic optimization
- Self-concordant inclusions: a unified framework for path-following generalized Newton-type algorithms
- Online first-order framework for robust convex optimization
- A smooth primal-dual optimization framework for nonsmooth composite convex minimization
- Image restoration based on the minimized surface regularization
- Second-order stochastic optimization for machine learning in linear time
- A level-set method for convex optimization with a feasible solution path
- A double extrapolation primal-dual algorithm for saddle point problems
- Adaptive two-stage Bregman method for variational inequalities
- Adaptive extraproximal algorithm for the equilibrium problem in Hadamard spaces
- Nonsymmetric proximal point algorithm with moving proximal centers for variational inequalities: convergence analysis
- Local saddle points for unconstrained polynomial optimization
- Extragradient and extrapolation methods with generalized Bregman distances for saddle point problems
- An alternating direction method of multipliers with a worst-case \(O(1/n^2)\) convergence rate
- Saddle points of rational functions
- A primal-dual algorithm with line search for general convex-concave saddle point problems
- A cyclic block coordinate descent method with generalized gradient projections
- Decomposition techniques for bilinear saddle point problems and variational inequalities with affine monotone operators
- Dynamic stochastic approximation for multi-stage stochastic optimization
- On the iteration complexity of some projection methods for monotone linear variational inequalities
- An optimal randomized incremental gradient method
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems