Generalized stochastic Frank-Wolfe algorithm with stochastic “substitute” gradient for structured convex optimization
Publication: 2020608
DOI: 10.1007/s10107-020-01480-7
zbMath: 1465.90063
arXiv: 1807.07680
OpenAlex: W3009614481
MaRDI QID: Q2020608
Publication date: 23 April 2021
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1807.07680
MSC classification: Analysis of algorithms and problem complexity (68Q25); Convex programming (90C25); Large-scale problems in mathematical programming (90C06)
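For orientation, the sketch below illustrates the generic Frank-Wolfe (conditional gradient) template that the publication builds on, with a maintained gradient estimate that is refreshed only on a random block of coordinates each iteration. This is an illustrative assumption only, not the paper's specific substitute-gradient scheme; the feasible set (probability simplex), the function names `grad_f`, `frank_wolfe_simplex`, and all parameters are hypothetical.

```python
# Illustrative sketch only: generic Frank-Wolfe over the probability simplex,
# with a gradient surrogate refreshed on a random coordinate block each step.
# Not the paper's algorithm; all names and choices here are hypothetical.
import numpy as np

def frank_wolfe_simplex(grad_f, x0, T=1000, block_size=None, rng=None):
    """Generic Frank-Wolfe loop.

    grad_f: callable returning the full gradient at x (hypothetical interface).
    block_size: if given, only that many randomly chosen coordinates of the
                maintained gradient surrogate are refreshed per iteration.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    n = x.size
    g = grad_f(x)                      # gradient surrogate, refreshed lazily
    for t in range(T):
        if block_size is None:
            g = grad_f(x)              # exact gradient (classic Frank-Wolfe)
        else:
            idx = rng.choice(n, size=block_size, replace=False)
            g[idx] = grad_f(x)[idx]    # refresh only a random block of coords
        # Linear minimization oracle over the simplex: best vertex w.r.t. g.
        s = np.zeros(n)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2.0)        # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy usage (hypothetical): minimize ||x - c||^2 over the simplex.
if __name__ == "__main__":
    c = np.array([0.1, 0.7, 0.2])
    x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - c),
                                 x0=np.ones(3) / 3.0, T=500, block_size=1)
    print(x_star)
```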
Related Items
- Analysis of the Frank-Wolfe method for convex composite optimization involving a logarithmically-homogeneous barrier
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- CUR matrix decompositions for improved data analysis
- Subgradient methods for huge-scale optimization problems
- Conditional gradient algorithms for norm-regularized smooth convex optimization
- On the complexity analysis of randomized block-coordinate descent methods
- Minimizing finite sums with the stochastic average gradient
- High-dimensional Ising model selection using \(\ell _{1}\)-regularized logistic regression
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Forward-backward splitting with Bregman distances
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Exact matrix completion via convex optimization
- Conditional Gradient Sliding for Convex Optimization
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Duality Between Subgradient and Conditional Gradient Methods
- An Extended Frank--Wolfe Method with “In-Face” Directions, and Its Application to Low-Rank Matrix Completion
- An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
- Relatively Smooth Convex Optimization by First-Order Methods, and Applications
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- Generalized Conditional Gradient for Sparse Estimation
- Katyusha: the first direct acceleration of stochastic gradient methods
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- New analysis and results for the Frank-Wolfe method