Using Taylor-approximated gradients to improve the Frank-Wolfe method for empirical risk minimization
From MaRDI portal
Publication:6579995
Recommendations
- Frank-Wolfe style algorithms for large scale optimization
- New analysis and results for the Frank-Wolfe method
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute gradient" for structured convex optimization
- Generalized self-concordant analysis of Frank-Wolfe algorithms
- A novel Frank-Wolfe algorithm. Analysis and applications to large-scale SVM training
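For context, the classic Frank-Wolfe (conditional gradient) iteration that the works above build on replaces projection with a linear minimization oracle over the feasible set. Below is a minimal sketch on a toy least-squares problem over the probability simplex; this illustrates only the standard method, not the Taylor-approximated gradient variant of the publication itself, and the problem instance is an invented example.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=2000):
    """Classic Frank-Wolfe over the probability simplex.

    Each step calls the linear minimization oracle (LMO): over the
    simplex, argmin_s <g, s> is the standard basis vector at the
    smallest gradient coordinate. The iterate is then a convex
    combination, so feasibility is maintained for free.
    """
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # LMO solution: a simplex vertex
        gamma = 2.0 / (t + 2.0)      # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy instance: min_x ||A x - b||^2 subject to x in the simplex.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([0.5, 0.5, 0.0, 0.0, 0.0])  # sparse solution on a face
b = A @ x_true
grad = lambda x: 2.0 * A.T @ (A @ x - b)
x = frank_wolfe_simplex(grad, np.ones(5) / 5.0)
```

Because the LMO returns vertices, iterates are sparse convex combinations of few vertices, which is why Frank-Wolfe methods are attractive for large-scale structured problems such as the SVM training and matrix completion applications listed above.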
Cites work
- scientific article; zbMATH DE number 845714
- scientific article; zbMATH DE number 6253954
- An extended Frank-Wolfe method with "in-face" directions, and its application to low-rank matrix completion
- Conditional gradient sliding for convex optimization
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute gradient" for structured convex optimization
- Stochastic conditional gradient methods: from convex minimization to submodular maximization
- Stochastic conditional gradient++: (Non)convex minimization and continuous submodular maximization
- The Elements of Statistical Learning
- The landscape of empirical risk for nonconvex losses