Using Taylor-approximated gradients to improve the Frank-Wolfe method for empirical risk minimization
DOI: 10.1137/22M1519286 · MaRDI QID: Q6579995
Authors: Zikai Xiong, Robert M. Freund
Publication date: 29 July 2024
Published in: SIAM Journal on Optimization
Recommendations
- Frank-Wolfe style algorithms for large scale optimization
- New analysis and results for the Frank-Wolfe method
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute gradient" for structured convex optimization
- Generalized self-concordant analysis of Frank-Wolfe algorithms
- A novel Frank-Wolfe algorithm. Analysis and applications to large-scale SVM training
Keywords: computational complexity · convex optimization · linear prediction · empirical risk minimization · Frank-Wolfe · linear minimization oracle
MSC classifications: Convex programming (90C25) · Large-scale problems in mathematical programming (90C06) · Analysis of algorithms and problem complexity (68Q25) · Nonconvex programming, global optimization (90C26) · Abstract computational complexity for mathematical programming problems (90C60)
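The keywords above center on the Frank-Wolfe (conditional gradient) method and its linear minimization oracle. As context for the record, here is a minimal generic sketch of the classical Frank-Wolfe iteration on a least-squares problem over the probability simplex; this is the standard textbook method, not the paper's Taylor-approximated-gradient variant, and the problem instance is an illustrative assumption.

```python
import numpy as np

# Classical Frank-Wolfe sketch (NOT the paper's Taylor-approximated
# variant): minimize f(x) = 0.5 * ||Ax - b||^2 over the probability
# simplex. The simplex's linear minimization oracle (LMO) is trivial:
# it returns the vertex e_i with the smallest gradient coordinate.
def frank_wolfe(A, b, iters=200):
    n = A.shape[1]
    x = np.ones(n) / n                  # start at the simplex barycenter
    for k in range(iters):
        grad = A.T @ (A @ x - b)        # gradient of 0.5 * ||Ax - b||^2
        i = np.argmin(grad)             # LMO: best simplex vertex
        s = np.zeros(n)
        s[i] = 1.0
        gamma = 2.0 / (k + 2)           # classic open-loop step size
        x = (1 - gamma) * x + gamma * s # convex combination stays feasible
    return x

# Hypothetical test instance: recover a sparse point of the simplex.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
x_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
b = A @ x_true
x_hat = frank_wolfe(A, b)
```

Each iterate is a convex combination of simplex vertices, so feasibility is maintained without projection; this reliance on a cheap LMO instead of projections is what makes Frank-Wolfe attractive for the large-scale empirical risk minimization setting the paper addresses.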
Cites Work
- Title not available
- Title not available
- The Elements of Statistical Learning
- Conditional gradient sliding for convex optimization
- Stochastic conditional gradient methods: from convex minimization to submodular maximization
- The landscape of empirical risk for nonconvex losses
- An extended Frank-Wolfe method with "in-face" directions, and its application to low-rank matrix completion
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute gradient" for structured convex optimization
- Stochastic conditional gradient++: (Non)convex minimization and continuous submodular maximization