Convergence Analysis of the Frank-Wolfe Algorithm and Its Generalization in Banach Spaces
From MaRDI portal
Publication: 6292802
arXiv: 1710.07367 · MaRDI QID: Q6292802 · FDO: Q6292802
Authors: Hong-Kun Xu
Publication date: 19 October 2017
Abstract: The Frank-Wolfe algorithm, one of the earliest optimization methods, also known as the conditional gradient method, was introduced by Frank and Wolfe in 1956. Due to its simple linear subproblems, the Frank-Wolfe algorithm has recently received much attention for solving large-scale structured optimization problems arising in applied areas such as signal processing and machine learning. In this paper we discuss in detail the convergence analysis of the Frank-Wolfe algorithm in Banach spaces. Two ways of selecting the stepsizes are discussed: the line minimization search method and the open loop rule. In both cases, we prove convergence of the Frank-Wolfe algorithm in the case where the objective function f has a uniformly continuous (on bounded sets) Fréchet derivative f'. We introduce the notion of the curvature constant of order σ ∈ (1, 2] and obtain the rate O(1/k^{σ−1}) of convergence of the Frank-Wolfe algorithm. In particular, this rate reduces to O(1/k^ν) if f' is ν-Hölder continuous for ν ∈ (0, 1], and to O(1/k) if f' is Lipschitz continuous. A generalized Frank-Wolfe algorithm is also introduced to address the problem of minimizing a composite objective function. Convergence of the iterates of both the Frank-Wolfe and the generalized Frank-Wolfe algorithms is investigated.
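The iteration described in the abstract — solve a linear subproblem over the feasible set, then move toward its solution with the open-loop stepsize rule — can be sketched in finite dimensions. This is a minimal illustration, not the paper's Banach-space setting: the simplex-constrained least-squares instance, the matrix A, and the helper name `frank_wolfe_simplex` are assumptions chosen so the linear subproblem has a closed-form solution.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Frank-Wolfe with the open-loop stepsize gamma_k = 2/(k+2).

    Over the probability simplex, the linear subproblem
    min_{s feasible} <grad f(x), s> is solved in closed form:
    the minimizer is the vertex e_i with the smallest gradient
    coordinate.
    """
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        i = int(np.argmin(g))            # linear minimization oracle
        s = np.zeros_like(x)
        s[i] = 1.0                       # simplex vertex e_i
        gamma = 2.0 / (k + 2)            # open-loop rule
        x = (1 - gamma) * x + gamma * s  # convex combination stays feasible
    return x

# Illustrative problem (not from the paper): min ||Ax - b||^2 over the simplex,
# with b chosen so the simplex barycenter is a minimizer.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 5))
b = A @ np.full(5, 0.2)
grad = lambda x: 2 * A.T @ (A @ x - b)
x = frank_wolfe_simplex(grad, np.eye(5)[0])  # start at a vertex
```

Because each iterate is a convex combination of the previous iterate and a vertex, feasibility never needs to be enforced by projection — this is the structural advantage of the linear subproblems the abstract mentions.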
Numerical mathematical programming methods (65K05) · Convex programming (90C25) · Numerical methods based on nonlinear programming (49M37)