One-sided Frank-Wolfe algorithms for saddle problems

From MaRDI portal
Publication: Q6359372

arXiv: 2101.12617 · MaRDI QID: Q6359372 · FDO: Q6359372


Authors: Vladimir Kolmogorov, Thomas Pock


Publication date: 29 January 2021

Abstract: We study a class of convex-concave saddle-point problems of the form $\min_x \max_y \langle Kx, y\rangle + f_{\mathcal{P}}(x) - h^*(y)$, where $K$ is a linear operator, $f_{\mathcal{P}}$ is the sum of a convex function $f$ with a Lipschitz-continuous gradient and the indicator function of a bounded convex polytope $\mathcal{P}$, and $h^*$ is a convex (possibly nonsmooth) function. Such problems arise, for example, as Lagrangian relaxations of various discrete optimization problems. Our main assumptions are the existence of an efficient linear minimization oracle (lmo) for $f_{\mathcal{P}}$ and an efficient proximal map for $h^*$, which motivate the solution via a blend of proximal primal-dual algorithms and Frank-Wolfe algorithms. In case $h^*$ is the indicator function of a linear constraint and the function $f$ is quadratic, we show a $O(1/n^2)$ convergence rate on the dual objective, requiring $O(n \log n)$ calls of lmo. If the problem comes from the constrained optimization problem $\min_{x \in \mathbb{R}^d} \{ f_{\mathcal{P}}(x) : Ax - b = 0 \}$, then we additionally get the bound $O(1/n^2)$ both on the primal gap and on the infeasibility gap. In the most general case, we show a $O(1/n)$ convergence rate of the primal-dual gap, again requiring $O(n \log n)$ calls of lmo. To the best of our knowledge, this improves on the known convergence rates for the considered class of saddle-point problems. We show applications to labeling problems frequently appearing in machine learning and computer vision.
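To illustrate the linear-minimization-oracle assumption central to the abstract, here is a minimal, hypothetical sketch (not the authors' one-sided algorithm): a classic Frank-Wolfe iteration on a smooth quadratic over the unit simplex, a simple bounded polytope whose lmo just picks the vertex minimizing the linearized objective. All names (`lmo_simplex`, `frank_wolfe`) and the choice of objective are illustrative assumptions.

```python
import numpy as np

def lmo_simplex(g):
    # Linear minimization oracle over the unit simplex
    # {x >= 0, sum(x) = 1}: the minimizer of <g, x> is the
    # vertex e_i with i = argmin_i g_i.
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

def frank_wolfe(c, iters=2000):
    # Plain Frank-Wolfe on f(x) = 0.5 * ||x - c||^2 over the
    # simplex, using only lmo calls (no projections).
    x = np.full_like(c, 1.0 / len(c))  # start at the barycenter
    for n in range(iters):
        grad = x - c                   # gradient of f at x
        s = lmo_simplex(grad)          # lmo call: best vertex
        gamma = 2.0 / (n + 2)          # standard open-loop step size
        x = x + gamma * (s - x)        # convex step toward the vertex
    return x

c = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe(c)  # c lies in the simplex, so x approaches c
```

The classical analysis gives the well-known $O(1/n)$ rate on the primal gap for this projection-free scheme; the paper's contribution is the faster rates quoted above for the saddle-point setting, obtained by combining such lmo calls with proximal steps on the dual.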

This page was built for publication: One-sided Frank-Wolfe algorithms for saddle problems