Tight ergodic sublinear convergence rate of the relaxed proximal point algorithm for monotone variational inequalities
DOI: 10.1007/s10957-022-02058-3 · zbMATH Open: 1545.65268 · MaRDI QID: Q6596341
Authors: Guoyong Gu, Junfeng Yang
Publication date: 2 September 2024
Published in: Journal of Optimization Theory and Applications
Keywords: variational inequality; performance estimation; sublinear convergence rate; relaxed proximal point algorithm; tight complexity bound
MSC classification: Convex programming (90C25); Complementarity and equilibrium problems and variational inequalities, finite dimensions (90C33); Numerical methods for variational inequalities and related problems (65K15)
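For context on the method named in the title: the relaxed proximal point algorithm iterates \(z^{k+1} = (1-\gamma)z^k + \gamma J_{\lambda T}(z^k)\), where \(J_{\lambda T} = (I + \lambda T)^{-1}\) is the resolvent of a maximally monotone operator \(T\) and \(\gamma \in (0, 2)\) is the relaxation parameter. A minimal illustrative sketch (not the authors' code, and not the paper's performance-estimation analysis), assuming a linear monotone operator \(T(z) = Az\) with \(A\) skew-symmetric so the resolvent has closed form:

```python
# Relaxed proximal point iteration on T(z) = A z with A skew-symmetric
# (hence monotone). The resolvent J = (I + lam*A)^{-1} is computed
# directly; the fixed-point residual ||z^k - J(z^k)|| measures progress.
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric => monotone
lam, gamma = 1.0, 1.5                    # proximal step and relaxation in (0, 2)
J = np.linalg.inv(np.eye(2) + lam * A)   # resolvent of lam*T

z = np.array([1.0, 1.0])
residuals = []
for _ in range(200):
    w = J @ z                                 # proximal (resolvent) step
    residuals.append(np.linalg.norm(z - w))   # fixed-point residual
    z = (1.0 - gamma) * z + gamma * w         # relaxed update

print(f"initial residual {residuals[0]:.3e}, final residual {residuals[-1]:.3e}")
```

The paper itself studies the tight ergodic sublinear rate of this scheme for monotone variational inequalities via performance estimation; the sketch above only illustrates the iteration, not the worst-case bound.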
Cites Work
- Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- Primal-dual subgradient methods for convex problems
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Monotone Operators and the Proximal Point Algorithm
- Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers
- Proximité et dualité dans un espace hilbertien
- Title not available
- On the Convergence of the Proximal Point Algorithm for Convex Minimization
- Customized proximal point algorithms for linearly constrained convex minimization and saddle-point problems: a unified approach
- Convex analysis and monotone operator theory in Hilbert spaces
- Performance of first-order methods for smooth convex minimization: a novel approach
- Optimized first-order methods for smooth convex minimization
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- The proximal point algorithm with genuine superlinear convergence for the monotone complementarity problem
- An optimal variant of Kelley's cutting-plane method
- Exact worst-case performance of first-order methods for composite convex optimization
- Quadratic Matrix Programming
- New Proximal Point Algorithms for Convex Minimization
- On the convergence analysis of the optimized gradient method
- On the convergence rate of the Halpern-iteration
- Generalizing the optimized gradient method for smooth convex minimization
- Another look at the fast iterative shrinkage/thresholding algorithm (FISTA)
- Operator splitting performance estimation: tight contraction factors and optimal parameter selection
- Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Accelerated proximal point method for maximally monotone operators
- Tight sublinear convergence rate of the proximal point algorithm for maximal monotone inclusion problems
Cited In (1)