Convergence Analysis of the Frank-Wolfe Algorithm and Its Generalization in Banach Spaces


arXiv: 1710.07367 · MaRDI QID: Q6292802


Authors: Hong-Kun Xu


Publication date: 19 October 2017

Abstract: The Frank-Wolfe algorithm, one of the earliest optimization methods and also known as the conditional gradient method, was introduced by Frank and Wolfe in 1956. Owing to its simple linear subproblems, the Frank-Wolfe algorithm has recently received much attention for solving large-scale structured optimization problems arising in applied areas such as signal processing and machine learning. In this paper we discuss in detail the convergence analysis of the Frank-Wolfe algorithm in Banach spaces. Two stepsize selection rules are discussed: the line minimization search method and the open loop rule. In both cases, we prove convergence of the Frank-Wolfe algorithm when the objective function $f$ has a (uniformly continuous on bounded sets) Fréchet derivative $f'$. We introduce the notion of the curvature constant of order $\sigma \in (1,2]$ and obtain the rate $O(1/k^{\sigma-1})$ of convergence of the Frank-Wolfe algorithm. In particular, this rate reduces to $O(1/k^{\nu})$ if $f'$ is $\nu$-Hölder continuous for $\nu \in (0,1]$, and to $O(1/k)$ if $f'$ is Lipschitz continuous. A generalized Frank-Wolfe algorithm is also introduced to address the problem of minimizing a composite objective function. Convergence of the iterates of both the Frank-Wolfe and the generalized Frank-Wolfe algorithms is investigated.
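As context for the method the abstract discusses, below is a minimal finite-dimensional sketch of the Frank-Wolfe iteration with the open-loop stepsize $\gamma_k = 2/(k+2)$. The function names (`frank_wolfe`, `simplex_oracle`) and the unit-simplex example are illustrative assumptions; the sketch does not reproduce the paper's Banach-space analysis or its generalized composite variant.

```python
import numpy as np

def frank_wolfe(grad_f, linear_oracle, x0, num_iters=200):
    # Frank-Wolfe / conditional gradient iteration with the
    # open-loop stepsize gamma_k = 2/(k+2).
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        g = grad_f(x)
        s = linear_oracle(g)      # linear subproblem: argmin_{s in C} <g, s>
        gamma = 2.0 / (k + 2.0)   # open-loop stepsize rule
        x = x + gamma * (s - x)   # convex combination stays in the feasible set C
    return x

# Illustrative problem: minimize f(x) = ||x - b||^2 over the unit simplex.
b = np.array([0.2, 0.5, 0.3])
grad_f = lambda x: 2.0 * (x - b)

def simplex_oracle(g):
    # Over the simplex, the linear subproblem is solved at a vertex:
    # the coordinate with the most negative gradient entry.
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x_star = frank_wolfe(grad_f, simplex_oracle, x0=np.ones(3) / 3)
print(x_star)  # approaches b, which lies in the simplex
```

Since the gradient of this quadratic objective is Lipschitz continuous, the abstract's $O(1/k)$ rate applies to this toy instance.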
