A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization


DOI: 10.1137/140985366
zbMath: 1342.65142
arXiv: 1301.4666
OpenAlex: W2481906283
MaRDI QID: Q5741072

Dan Garber, Elad Hazan

Publication date: 21 July 2016

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1301.4666
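For context on the method named in the title: the classic conditional gradient (Frank-Wolfe) algorithm, which this paper strengthens to linear convergence under strong convexity, avoids projections entirely by calling a linear minimization oracle over the feasible set at every step and moving toward the oracle's answer. Below is a minimal Python sketch of that classic baseline only, under standard textbook assumptions (step size 2/(t+2), a quadratic objective, and a probability-simplex oracle chosen here for illustration); it does not implement the paper's linearly convergent variant.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, num_iters=100):
    """Classic conditional gradient (Frank-Wolfe) iteration.

    grad: callable returning the gradient of f at x.
    lmo:  linear minimization oracle, returns argmin over the
          feasible set of <g, s> for a gradient g.
    x0:   feasible starting point.
    """
    x = x0
    for t in range(num_iters):
        g = grad(x)
        s = lmo(g)                          # best vertex in the oracle's view
        gamma = 2.0 / (t + 2.0)             # standard O(1/t) step schedule
        x = (1.0 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Illustrative use: minimize ||x - b||^2 over the probability simplex.
# The simplex LMO puts all mass on the most negative gradient coordinate.
b = np.array([0.1, 0.7, 0.2])
grad = lambda x: 2.0 * (x - b)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]
x0 = np.ones(3) / 3.0
print(frank_wolfe(grad, lmo, x0))  # approaches b at an O(1/t) rate
```

The point of the sketch is the feasibility-by-convex-combination property: no projection is ever computed, which is what makes conditional gradient methods attractive over structured sets such as polytopes, the setting in which the paper obtains its linear rate.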



Related Items

Linearly convergent away-step conditional gradient for non-strongly convex functions
A Newton Frank-Wolfe method for constrained self-concordant minimization
Frank–Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning
A unified analysis of stochastic gradient‐free Frank–Wolfe methods
The Frank-Wolfe algorithm: a short introduction
No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
Generalized self-concordant analysis of Frank-Wolfe algorithms
First-order methods for convex optimization
Conditional gradient type methods for composite nonlinear and stochastic optimization
Improved complexities of conditional gradient-type methods with applications to robust matrix recovery problems
Frank-Wolfe and friends: a journey into projection-free first-order optimization methods
A novel Frank-Wolfe algorithm. Analysis and applications to large-scale SVM training
On the von Neumann and Frank–Wolfe Algorithms with Away Steps
Complexity bounds for primal-dual methods minimizing the model of objective function
A Polynomial-Time Descent Method for Separable Convex Optimization Problems with Linear Constraints
Conditional Gradient Sliding for Convex Optimization
Polytope Conditioning and Linear Convergence of the Frank–Wolfe Algorithm
Inverse reinforcement learning in contextual MDPs
Generalized Conditional Gradient for Sparse Estimation
The discrete yet ubiquitous theorems of Carathéodory, Helly, Sperner, Tucker, and Tverberg
Restarting Frank-Wolfe: faster rates under Hölderian error bounds



Cites Work