Linearly convergent away-step conditional gradient for non-strongly convex functions
Publication: 2364483
DOI: 10.1007/s10107-016-1069-4
zbMath: 1370.90010
arXiv: 1504.05002
OpenAlex: W2134882453
MaRDI QID: Q2364483
Publication date: 21 July 2017
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1504.05002
MSC classification:
- Convex programming (90C25)
- Numerical optimization and variational techniques (65K10)
- Computational methods for problems pertaining to operations research and mathematical programming (90-08)
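For orientation, the method this paper analyzes is the away-step variant of the conditional gradient (Frank-Wolfe) algorithm, which augments the usual forward step toward a linear-minimization vertex with "away" steps that shrink the weight of an active vertex. Below is a minimal Python sketch of the generic away-step scheme over a polytope given by its vertex list; the function name `away_step_cg`, the vertex-matrix representation `atoms`, and the short-step rule based on a gradient Lipschitz constant `L` are illustrative assumptions, not the paper's exact algorithm or notation.

```python
import numpy as np

def away_step_cg(grad, L, atoms, max_iter=1000, tol=1e-8):
    """Away-step conditional gradient over conv(atoms).

    Sketch only: `atoms` is an (m, n) array of polytope vertices,
    `grad(x)` returns the gradient of the smooth objective, and `L`
    is a Lipschitz constant of the gradient (short-step rule).
    """
    m = atoms.shape[0]
    alpha = np.zeros(m)
    alpha[0] = 1.0                          # start at the first vertex
    x = atoms[0].astype(float).copy()
    for _ in range(max_iter):
        g = grad(x)
        scores = atoms @ g
        s = int(np.argmin(scores))                     # Frank-Wolfe atom
        active = np.flatnonzero(alpha > 0)
        v = int(active[np.argmax(scores[active])])     # away atom
        d_fw = atoms[s] - x
        d_aw = x - atoms[v]
        if -g @ d_fw <= tol:                # Frank-Wolfe gap small: stop
            break
        if -g @ d_fw >= -g @ d_aw:          # forward (FW) step
            d, gamma_max, forward = d_fw, 1.0, True
        else:                               # away step (may drop atom v)
            d = d_aw
            gamma_max = alpha[v] / (1.0 - alpha[v])
            forward = False
        # short-step rule: minimize the quadratic upper model along d
        gamma = min(gamma_max, (-g @ d) / (L * (d @ d)))
        if forward:
            alpha *= 1.0 - gamma
            alpha[s] += gamma
        else:
            alpha *= 1.0 + gamma
            alpha[v] -= gamma
        np.clip(alpha, 0.0, None, out=alpha)  # guard against float drift
        x = alpha @ atoms
    return x, alpha
```

A representative instance of the paper's non-strongly convex setting is f(x) = (1/2)||Ax - b||^2 over the unit simplex with a rank-deficient A (so f is convex but not strongly convex); there `grad(x) = A.T @ (A @ x - b)` and `L` can be taken as the largest eigenvalue of A.T A.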
Related Items
- Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence
- Frank--Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning
- From error bounds to the complexity of first-order descent methods for convex functions
- Linear convergence of first order methods for non-strongly convex optimization
- Solving conic systems via projection and rescaling
- Linearly-convergent FISTA variant for composite optimization with duality
- Generalized self-concordant analysis of Frank-Wolfe algorithms
- First-order methods for convex optimization
- An easily computable upper bound on the Hoffman constant for homogeneous inequality systems
- An Extended Frank--Wolfe Method with “In-Face” Directions, and Its Application to Low-Rank Matrix Completion
- The smoothed complexity of Frank-Wolfe methods via conditioning of random matrices and polytopes
- Active Set Complexity of the Away-Step Frank--Wolfe Algorithm
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- Frank-Wolfe and friends: a journey into projection-free first-order optimization methods
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- New characterizations of Hoffman constants for systems of linear constraints
- On the von Neumann and Frank--Wolfe Algorithms with Away Steps
- The condition number of a function relative to a set
- Faster subgradient methods for functions with Hölderian growth
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Polytope Conditioning and Linear Convergence of the Frank–Wolfe Algorithm
- Generalized Conditional Gradient for Sparse Estimation
- Enhanced basic procedures for the projection and rescaling algorithm
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees
- First-order methods for the convex hull membership problem
- Avoiding bad steps in Frank-Wolfe variants
Cites Work
- Conditional gradient algorithms with open loop step size rules
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Introductory lectures on convex optimization. A basic course.
- Condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system
- A conditional gradient method with linear rate of convergence for solving convex linear systems
- An Extended Frank--Wolfe Method with “In-Face” Directions, and Its Application to Low-Rank Matrix Completion
- Some comments on Wolfe's ‘away step’
- Polytope Conditioning and Linear Convergence of the Frank–Wolfe Algorithm
- Foundations of Optimization
- A Tight Upper Bound on the Rate of Convergence of Frank-Wolfe Algorithm
- Convex Analysis
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization