Linearly convergent away-step conditional gradient for non-strongly convex functions
Publication:2364483
Abstract: We consider the problem of minimizing a function that is the sum of a linear function and a composition of a strongly convex function with a linear transformation, over a compact polyhedral set. Jaggi and Lacoste-Julien [14] showed that the conditional gradient method with away steps, applied to the aforementioned problem without the additional linear term, has a linear rate of convergence that depends on the so-called pyramidal width of the feasible set. We revisit this result and provide a variant of the algorithm together with an analysis based on simple duality arguments and corresponding error bounds. This new analysis (a) enables the incorporation of the additional linear term, (b) does not require a linear oracle that outputs an extreme point of the linear mapping of the feasible set, and (c) depends on a new constant, termed "the vertex-facet distance constant", which is explicitly expressed in terms of the problem's parameters and the geometry of the feasible set. This constant replaces the pyramidal width, which is difficult to evaluate.
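To make the setting concrete, the problem class is min_{x ∈ X} ⟨b, x⟩ + f(Ax), where X is a compact polytope, f is strongly convex, and A is a linear map. The following is a minimal sketch of a generic away-step conditional gradient (Frank-Wolfe) method, assuming X is the unit simplex (whose vertices are the standard basis vectors, so the linear oracle and the active-set bookkeeping are trivial) and f(y) = 0.5‖y − y0‖². It illustrates the algorithm family analyzed here, not the authors' exact variant; all names in it (away_step_cg, A, b, y0) are illustrative assumptions.

```python
import numpy as np

def away_step_cg(A, b, y0, n_iters=500, tol=1e-10):
    """Sketch: away-step conditional gradient on the unit simplex for
    min <b, x> + 0.5 * ||A x - y0||^2  (illustrative, not the paper's variant)."""
    n = A.shape[1]
    weights = np.zeros(n)      # convex-combination weights over the vertices e_i
    weights[0] = 1.0           # start at the vertex e_0
    x = weights.copy()         # on the simplex, x coincides with its own weights
    for _ in range(n_iters):
        grad = b + A.T @ (A @ x - y0)             # gradient of the objective at x
        s = int(np.argmin(grad))                  # linear oracle: FW vertex e_s
        active = np.flatnonzero(weights > 1e-14)
        u = int(active[np.argmax(grad[active])])  # away vertex e_u in the active set
        fw_gap = float(grad @ x - grad[s])        # <grad, x - e_s>, the FW duality gap
        away_gap = float(grad[u] - grad @ x)      # <grad, e_u - x>
        if fw_gap <= tol:
            break
        if fw_gap >= away_gap:                    # ordinary FW step toward e_s
            d = -x.copy(); d[s] += 1.0            # d = e_s - x
            gamma_max = 1.0
        else:                                     # away step: shift mass off e_u
            d = x.copy(); d[u] -= 1.0             # d = x - e_u
            gamma_max = weights[u] / (1.0 - weights[u])
        Ad = A @ d                                # exact line search for the quadratic
        curv = float(Ad @ Ad)
        gamma = gamma_max if curv == 0.0 else min(gamma_max, -float(grad @ d) / curv)
        x = x + gamma * d
        if fw_gap >= away_gap:                    # keep the vertex weights in sync
            weights *= (1.0 - gamma); weights[s] += gamma
        else:
            weights *= (1.0 + gamma); weights[u] -= gamma
        np.clip(weights, 0.0, None, out=weights)
    return x

# Tiny usage example with random (illustrative) data:
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
x_hat = away_step_cg(A, b=rng.standard_normal(8), y0=rng.standard_normal(5))
```

The away step removes weight from the worst active vertex, which is the mechanism behind linear convergence over a polytope; the Frank-Wolfe gap ⟨grad, x − e_s⟩ doubles as a duality-gap stopping criterion.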
Recommendations
- Conditions for linear convergence of the gradient method for non-convex optimization
- scientific article; zbMATH DE number 4141808
- scientific article; zbMATH DE number 880246
- A conditional gradient method with linear rate of convergence for solving convex linear systems
- An extension of the conditional gradient method to a class of nonconvex optimization problems
- Linear convergence of accelerated conditional gradient algorithms in spaces of measures
- scientific article; zbMATH DE number 4057300
- On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
- Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
Cites work
- scientific article; zbMATH DE number 6378119
- scientific article; zbMATH DE number 1818892
- scientific article; zbMATH DE number 3526459
- scientific article; zbMATH DE number 3293978
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- A Tight Upper Bound on the Rate of Convergence of Frank-Wolfe Algorithm
- A conditional gradient method with linear rate of convergence for solving convex linear systems
- An extended Frank-Wolfe method with "in-face" directions, and its application to low-rank matrix completion
- Condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system
- Conditional gradient algorithms with open loop step size rules
- Convex Analysis
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Foundations of Optimization
- Introductory lectures on convex optimization. A basic course.
- Polytope conditioning and linear convergence of the Frank-Wolfe algorithm
- Some comments on Wolfe's ‘away step’
Cited in (32)
- First-order methods for the convex hull membership problem
- Avoiding bad steps in Frank-Wolfe variants
- An away-step Frank-Wolfe algorithm for constrained multiobjective optimization
- Generalized conditional gradient for sparse estimation
- First-order methods for convex optimization
- Conditional gradient method without line-search
- From error bounds to the complexity of first-order descent methods for convex functions
- Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- Polytope conditioning and linear convergence of the Frank-Wolfe algorithm
- New characterizations of Hoffman constants for systems of linear constraints
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- An easily computable upper bound on the Hoffman constant for homogeneous inequality systems
- Linear convergence of first order methods for non-strongly convex optimization
- Frank-Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning
- The condition number of a function relative to a set
- Kissing polytopes
- Enhanced basic procedures for the projection and rescaling algorithm
- PolyCD: optimization via cycling through the vertices of a polytope
- An extended Frank-Wolfe method with "in-face" directions, and its application to low-rank matrix completion
- Solving conic systems via projection and rescaling
- Linearly-convergent FISTA variant for composite optimization with duality
- Generalized self-concordant analysis of Frank-Wolfe algorithms
- Active set complexity of the away-step Frank-Wolfe algorithm
- The smoothed complexity of Frank-Wolfe methods via conditioning of random matrices and polytopes
- Faster subgradient methods for functions with Hölderian growth
- scientific article; zbMATH DE number 4057300
- Frank-Wolfe and friends: a journey into projection-free first-order optimization methods
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- On the von Neumann and Frank-Wolfe algorithms with away steps
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth