Linearly convergent away-step conditional gradient for non-strongly convex functions
DOI: 10.1007/S10107-016-1069-4 · zbMATH Open: 1370.90010 · arXiv: 1504.05002 · OpenAlex: W2134882453 · MaRDI QID: Q2364483 · FDO: Q2364483
Publication date: 21 July 2017
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1504.05002
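The record's subject, the away-step variant of the conditional gradient (Frank-Wolfe) method, can be illustrated with a minimal sketch. The unit-simplex feasible set, the quadratic objective, and all names below are illustrative assumptions for exposition, not details taken from the paper itself:

```python
import numpy as np

def away_step_fw(b, n, n_iters=500):
    """Illustrative away-step Frank-Wolfe: minimize f(x) = 0.5*||x - b||^2
    over the unit simplex in R^n. The vertices are the standard basis
    vectors, so the convex-combination weights are the coordinates of x."""
    x = np.full(n, 1.0 / n)                  # start at the barycenter
    for _ in range(n_iters):
        g = x - b                            # gradient of f at x
        s = int(np.argmin(g))                # Frank-Wolfe vertex e_s
        d_fw = -x.copy(); d_fw[s] += 1.0     # FW direction e_s - x
        if -(g @ d_fw) < 1e-10:              # FW duality gap ~ 0: done
            break
        active = np.flatnonzero(x > 1e-12)   # support of x (active set)
        a = active[np.argmax(g[active])]     # away vertex e_a
        d_aw = x.copy(); d_aw[a] -= 1.0      # away direction x - e_a
        if g @ d_fw <= g @ d_aw:             # more descent along FW step
            d, gmax = d_fw, 1.0
        else:                                # away step, limited so x[a] >= 0
            d, gmax = d_aw, x[a] / (1.0 - x[a]) if x[a] < 1.0 else np.inf
        denom = d @ d
        if denom <= 1e-15:
            break
        # exact line search for this quadratic, clipped to the feasible range
        t = min(gmax, max(0.0, -(g @ d) / denom))
        x = x + t * d
    return x
```

The away step is what removes weight from poorly chosen active vertices; it is this mechanism that the paper analyzes to obtain linear convergence without strong convexity.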
Recommendations
- Conditions for linear convergence of the gradient method for non-convex optimization
- A conditional gradient method with linear rate of convergence for solving convex linear systems
- An extension of the conditional gradient method to a class of nonconvex optimization problems
- Linear convergence of accelerated conditional gradient algorithms in spaces of measures
- On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
- Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
Numerical optimization and variational techniques (65K10); Convex programming (90C25); Computational methods for problems pertaining to operations research and mathematical programming (90-08)
Cites Work
- Introductory lectures on convex optimization. A basic course.
- Convex Analysis
- Title not available
- Conditional gradient algorithms with open loop step size rules
- Title not available
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Foundations of Optimization
- Condition number complexity of an elementary algorithm for computing a reliable solution of a conic linear system
- A conditional gradient method with linear rate of convergence for solving convex linear systems
- Some comments on Wolfe's ‘away step’
- Title not available
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- Title not available
- A Tight Upper Bound on the Rate of Convergence of Frank-Wolfe Algorithm
- An Extended Frank--Wolfe Method with “In-Face” Directions, and Its Application to Low-Rank Matrix Completion
- Polytope Conditioning and Linear Convergence of the Frank–Wolfe Algorithm
Cited In (32)
- First-order methods for the convex hull membership problem
- An away-step Frank-Wolfe algorithm for constrained multiobjective optimization
- Avoiding bad steps in Frank-Wolfe variants
- First-order methods for convex optimization
- Conditional gradient method without line-search
- Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence
- From error bounds to the complexity of first-order descent methods for convex functions
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- New characterizations of Hoffman constants for systems of linear constraints
- An easily computable upper bound on the Hoffman constant for homogeneous inequality systems
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Linear convergence of first order methods for non-strongly convex optimization
- Active Set Complexity of the Away-Step Frank--Wolfe Algorithm
- Frank--Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning
- The condition number of a function relative to a set
- Kissing polytopes
- Enhanced basic procedures for the projection and rescaling algorithm
- PolyCD: optimization via cycling through the vertices of a polytope
- An Extended Frank--Wolfe Method with “In-Face” Directions, and Its Application to Low-Rank Matrix Completion
- Generalized Conditional Gradient for Sparse Estimation
- Solving conic systems via projection and rescaling
- Linearly-convergent FISTA variant for composite optimization with duality
- Generalized self-concordant analysis of Frank-Wolfe algorithms
- The smoothed complexity of Frank-Wolfe methods via conditioning of random matrices and polytopes
- Title not available
- Faster subgradient methods for functions with Hölderian growth
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- Frank-Wolfe and friends: a journey into projection-free first-order optimization methods
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees
- Polytope Conditioning and Linear Convergence of the Frank–Wolfe Algorithm
- On the von Neumann and Frank-Wolfe algorithms with away steps
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
This page was built for publication: Linearly convergent away-step conditional gradient for non-strongly convex functions