Linearly convergent away-step conditional gradient for non-strongly convex functions

DOI: 10.1007/s10107-016-1069-4
zbMath: 1370.90010
arXiv: 1504.05002
OpenAlex: W2134882453
MaRDI QID: Q2364483

Amir Beck, Shimrit Shtern

Publication date: 21 July 2017

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1504.05002
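
For orientation, below is a minimal sketch of the generic away-step conditional gradient (Frank-Wolfe) iteration that this line of work analyzes, here over the unit simplex with a least-squares objective whose Hessian is singular, so the objective is convex but not strongly convex. The problem data are made up for illustration; this is the standard textbook variant with exact line search, not the authors' exact algorithm or an implementation from the paper.

```python
import numpy as np

# Away-step conditional gradient (Frank-Wolfe) over the unit simplex for
# f(x) = 0.5*||Ax - b||^2.  With m < n, A^T A is singular, so f is convex
# but not strongly convex -- the regime the paper addresses.
rng = np.random.default_rng(0)
m, n = 20, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def grad(x):
    return A.T @ (A @ x - b)

x = np.zeros(n)
x[0] = 1.0                                # start at a vertex e_0 of the simplex

for it in range(1000):
    g = grad(x)
    s = int(np.argmin(g))                 # Frank-Wolfe vertex: argmin_i g_i
    support = np.flatnonzero(x > 1e-12)   # vertices currently carrying weight
    v = support[np.argmax(g[support])]    # away vertex: worst active vertex
    fw_gap = g @ x - g[s]                 # duality gap; certifies optimality
    if fw_gap < 1e-8:
        break
    if fw_gap >= g[v] - g @ x:            # toward step: d = e_s - x
        d = -x.copy(); d[s] += 1.0
        gamma_max = 1.0
    else:                                 # away step: d = x - e_v
        d = x.copy(); d[v] -= 1.0
        # largest step that keeps x in the simplex (drops vertex v at the max)
        gamma_max = x[v] / (1.0 - x[v]) if x[v] < 1.0 else 1.0
    Ad = A @ d
    # exact line search for the quadratic objective, clipped to [0, gamma_max]
    gamma = min(gamma_max, -(g @ d) / (Ad @ Ad)) if Ad @ Ad > 0 else gamma_max
    x += gamma * d

print(f"stopped after {it} iterations, FW gap {fw_gap:.2e}")
```

The away step is what enables linear convergence on polytopes: a plain Frank-Wolfe step can only add weight to new vertices and zigzags near the boundary, while the away step can remove weight from a bad active vertex entirely.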



Related Items

Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence
Frank--Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning
From error bounds to the complexity of first-order descent methods for convex functions
Linear convergence of first order methods for non-strongly convex optimization
Solving conic systems via projection and rescaling
Linearly-convergent FISTA variant for composite optimization with duality
Generalized self-concordant analysis of Frank-Wolfe algorithms
First-order methods for convex optimization
An easily computable upper bound on the Hoffman constant for homogeneous inequality systems
An Extended Frank--Wolfe Method with “In-Face” Directions, and Its Application to Low-Rank Matrix Completion
The smoothed complexity of Frank-Wolfe methods via conditioning of random matrices and polytopes
Active Set Complexity of the Away-Step Frank--Wolfe Algorithm
Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
Frank-Wolfe and friends: a journey into projection-free first-order optimization methods
The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
New characterizations of Hoffman constants for systems of linear constraints
On the von Neumann and Frank--Wolfe Algorithms with Away Steps
The condition number of a function relative to a set
Faster subgradient methods for functions with Hölderian growth
Restarting the accelerated coordinate descent method with a rough strong convexity estimate
Polytope Conditioning and Linear Convergence of the Frank–Wolfe Algorithm
Generalized Conditional Gradient for Sparse Estimation
Enhanced basic procedures for the projection and rescaling algorithm
Inertial proximal incremental aggregated gradient method with linear convergence guarantees
First-order methods for the convex hull membership problem
Avoiding bad steps in Frank-Wolfe variants


