One-sided Frank-Wolfe algorithms for saddle problems
Publication: 6359372
arXiv: 2101.12617
MaRDI QID: Q6359372
Authors: Vladimir Kolmogorov, Thomas Pock
Publication date: 29 January 2021
Abstract: We study a class of convex-concave saddle-point problems of the form $\min_x \max_y \langle Kx, y\rangle + f_{\mathcal{P}}(x) - h^*(y)$, where $K$ is a linear operator, $f_{\mathcal{P}}$ is the sum of a convex function $f$ with a Lipschitz-continuous gradient and the indicator function of a bounded convex polytope $\mathcal{P}$, and $h^*$ is a convex (possibly nonsmooth) function. Such a problem arises, for example, as a Lagrangian relaxation of various discrete optimization problems. Our main assumptions are the existence of an efficient linear minimization oracle (lmo) for $f_{\mathcal{P}}$ and an efficient proximal map for $h^*$, which motivate the solution via a blend of proximal primal-dual algorithms and Frank-Wolfe algorithms. In the case where $h^*$ is the indicator function of a linear constraint and the function $f$ is quadratic, we show a $O(1/n^2)$ convergence rate on the dual objective, requiring $O(n \log n)$ calls of the lmo. If the problem comes from the constrained optimization problem $\min_{x \in \mathbb{R}^d} \{ f_{\mathcal{P}}(x) \mid Ax - b = 0 \}$, then we additionally obtain an $O(1/n^2)$ bound both on the primal gap and on the infeasibility gap. In the most general case, we show a $O(1/n)$ convergence rate of the primal-dual gap, again requiring $O(n \log n)$ calls of the lmo. To the best of our knowledge, this improves on the known convergence rates for the considered class of saddle-point problems. We show applications to labeling problems frequently appearing in machine learning and computer vision.
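To make the structure of such lmo-plus-proximal schemes concrete, the following is a minimal schematic sketch in Python, not the algorithm analyzed in the paper: it alternates a proximal step in the dual variable with a Frank-Wolfe (lmo-based) step over the polytope in the primal variable, on a small toy instance. The toy problem, the choice of $\mathcal{P}$ as the probability simplex, the step sizes, and the update order are all illustrative assumptions.

```python
import numpy as np

# Toy instance (illustrative assumptions, not from the paper):
#   min_{x in P} f(x) + h(Kx)  with  f(x) = 0.5*||x - c||^2,
#   h(z) = 0.5*||z||^2  (so h*(y) = 0.5*||y||^2), and
#   P = probability simplex, whose lmo returns a vertex e_i.
rng = np.random.default_rng(0)
d, m = 20, 10
K = rng.standard_normal((m, d))
c = rng.standard_normal(d)

def grad_f(x):
    return x - c

def lmo_simplex(g):
    """Linear minimization oracle over the simplex: argmin_{s in P} <g, s>."""
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

def prox_h_star(y, sigma):
    """prox_{sigma*h*}(y) for h*(y) = 0.5*||y||^2, i.e. y / (1 + sigma)."""
    return y / (1.0 + sigma)

# Schematic iteration: proximal ascent in the dual variable, then a
# Frank-Wolfe step in the primal variable using the lmo over P.
# The step size sigma and the classic gamma_k = 2/(k+2) schedule are
# illustrative choices only.
x = np.full(d, 1.0 / d)   # feasible starting point in the simplex
y = np.zeros(m)
sigma = 0.5

for k in range(500):
    y = prox_h_star(y + sigma * (K @ x), sigma)   # dual proximal step
    g = K.T @ y + grad_f(x)                       # linearized primal cost
    s = lmo_simplex(g)                            # single call to the lmo
    gamma = 2.0 / (k + 2.0)
    x = x + gamma * (s - x)                       # convex-combination update

primal = 0.5 * np.sum((x - c) ** 2) + 0.5 * np.sum((K @ x) ** 2)
print(f"primal objective after 500 iterations: {primal:.4f}")
```

The point of the sketch is only the division of labor suggested by the abstract: the polytope side is handled exclusively through linear minimization oracle calls, while the nonsmooth dual side is handled through its proximal map.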