The rate of convergence of Bregman proximal methods: local geometry versus regularity versus sharpness
From MaRDI portal
Publication: Q6573018
Recommendations
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
- On the linear convergence of a Bregman proximal point algorithm
- On the convergence rate of entropic proximal optimization methods
- On Dual Convergence and the Rate of Primal Convergence of Bregman’s Convex Programming Method
- Dual convergence of the proximal point method with Bregman distances for linear programming
Cites work
- scientific article; zbMATH DE number 1667417 (title unavailable)
- scientific article; zbMATH DE number 4015993 (title unavailable)
- scientific article; zbMATH DE number 5957285 (title unavailable)
- scientific article; zbMATH DE number 3790208 (title unavailable)
- scientific article; zbMATH DE number 3534286 (title unavailable)
- scientific article; zbMATH DE number 3296905 (title unavailable)
- A descent lemma beyond Lipschitz gradient continuity: first-order methods revisited and applications
- A modification of Karmarkar's linear programming algorithm
- A modification of the Arrow-Hurwicz method for search of saddle points
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- Convex Analysis
- Convex optimization: algorithms and complexity
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Game theory
- Hessian Riemannian Gradient Flows in Convex Programming
- Interior projection-like methods for monotone variational inequalities
- Introductory lectures on convex optimization. A basic course.
- Learning in games via reinforcement and regularization
- Learning in games with continuous action sets and unknown payoff functions
- Lectures on modern convex optimization. Analysis, algorithms, and engineering applications
- Mirror descent and nonlinear projected subgradient methods for convex optimization.
- Online learning and online convex optimization
- Possible generalization of Boltzmann-Gibbs statistics.
- Primal-dual subgradient methods for convex problems
- Projected reflected gradient methods for monotone variational inequalities
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Proximal Minimization Methods with Generalized Bregman Functions
- Regularization techniques for learning with matrices
- Relatively smooth convex optimization by first-order methods, and applications
- Robust Stochastic Approximation Approach to Stochastic Programming
- Solving variational inequalities with stochastic mirror-prox algorithm
- Stochastic games
- The Nonstochastic Multiarmed Bandit Problem