A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
Publication: 5741072
DOI: 10.1137/140985366
zbMath: 1342.65142
arXiv: 1301.4666
OpenAlex: W2481906283
MaRDI QID: Q5741072
Publication date: 21 July 2016
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1301.4666
Keywords: linear programming; stochastic optimization; online learning; Frank-Wolfe algorithm; first-order methods; online convex optimization; conditional gradient methods
Mathematics Subject Classification: Numerical mathematical programming methods (65K05); Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Nonlinear programming (90C30); Linear programming (90C05); Stochastic programming (90C15); Combinatorial optimization (90C27)
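For orientation (this sketch is not part of the catalogued record): below is a minimal Python illustration of the classic Frank-Wolfe / conditional gradient iteration that the paper's linearly convergent variant builds on, here minimizing a smooth convex function over the probability simplex. The objective, feasible set, and open-loop step-size rule are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of the classic Frank-Wolfe (conditional gradient)
# method, minimizing a smooth convex function over the probability simplex.
# The catalogued paper analyzes a variant of this scheme that attains a
# linear convergence rate under strong convexity over polytopes.
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=200):
    """Frank-Wolfe over the probability simplex.

    grad: callable returning the gradient of the objective at x.
    x0:   feasible starting point (nonnegative entries summing to 1).
    """
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        # Linear minimization oracle: over the simplex, <g, v> is minimized
        # at the vertex (standard basis vector) indexed by argmin of g.
        v = np.zeros_like(x)
        v[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2.0)      # standard open-loop step size
        x = x + gamma * (v - x)      # convex combination, so x stays feasible
    return x

# Example usage: minimize ||x - b||^2 over the simplex (b is arbitrary).
b = np.array([0.1, 0.7, 0.2])
x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - b), np.ones(3) / 3.0)
```

Note the projection-free character of the step: each iteration calls only a linear minimization oracle over the feasible set, which is what makes conditional gradient methods attractive for the combinatorial and large-scale settings listed in the classification above.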
Related Items
- Linearly convergent away-step conditional gradient for non-strongly convex functions
- A Newton Frank-Wolfe method for constrained self-concordant minimization
- Frank–Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning
- A unified analysis of stochastic gradient-free Frank–Wolfe methods
- The Frank-Wolfe algorithm: a short introduction
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- Generalized self-concordant analysis of Frank-Wolfe algorithms
- First-order methods for convex optimization
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- Improved complexities of conditional gradient-type methods with applications to robust matrix recovery problems
- Frank-Wolfe and friends: a journey into projection-free first-order optimization methods
- A novel Frank-Wolfe algorithm. Analysis and applications to large-scale SVM training
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- On the von Neumann and Frank–Wolfe Algorithms with Away Steps
- Complexity bounds for primal-dual methods minimizing the model of objective function
- A Polynomial-Time Descent Method for Separable Convex Optimization Problems with Linear Constraints
- Conditional Gradient Sliding for Convex Optimization
- Polytope Conditioning and Linear Convergence of the Frank–Wolfe Algorithm
- Inverse reinforcement learning in contextual MDPs
- Generalized Conditional Gradient for Sparse Estimation
- The discrete yet ubiquitous theorems of Carathéodory, Helly, Sperner, Tucker, and Tverberg
- Restarting Frank-Wolfe: faster rates under Hölderian error bounds
Cites Work
- Conditional gradient algorithms for norm-regularized smooth convex optimization
- A regularization of the Frank-Wolfe method and unification of certain nonlinear programming methods
- A conditional gradient method with linear rate of convergence for solving convex linear systems
- Combinatorial optimization. Polyhedra and efficiency (3 volumes)
- Linearly convergent away-step conditional gradient for non-strongly convex functions
- Efficient algorithms for online decision problems
- Online Learning and Online Convex Optimization
- On the Generalization Ability of On-Line Learning Algorithms
- Some comments on Wolfe's ‘away step’
- Convergence Rates for Conditional Gradient Sequences Generated by Implicit Step Length Rules
- Logarithmic Regret Algorithms for Online Convex Optimization
- Sparse Approximate Solutions to Semidefinite Programs
- Linear convergence of a modified Frank–Wolfe algorithm for computing minimum-volume enclosing ellipsoids
- Prediction, Learning, and Games
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization