Conditional gradient type methods for composite nonlinear and stochastic optimization
DOI: 10.1007/s10107-017-1225-5 · zbMath: 1410.90150 · arXiv: 1602.00961 · OpenAlex: W2264518955 · MaRDI QID: Q1717236
Publication date: 7 February 2019
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1602.00961
Keywords: nonconvex optimization; iteration complexity; strongly convex optimization; weakly smooth functions; conditional gradient type methods; unified methods
Analysis of algorithms and problem complexity (68Q25) Convex programming (90C25) Nonconvex programming, global optimization (90C26) Stochastic programming (90C15) Stochastic approximation (62L20)
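For orientation, the classical conditional gradient (Frank-Wolfe) step that underlies the methods studied in this publication can be sketched as follows. This is a generic textbook illustration on the probability simplex with the standard open-loop step size, not the paper's own algorithm; the function names and the example objective are assumptions for the sketch.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, num_iters=1000):
    """Classical conditional gradient (Frank-Wolfe) on the probability simplex.

    Each iteration solves the linear subproblem min_{s in simplex} <grad f(x), s>,
    whose solution is a vertex (a coordinate basis vector), then takes a convex
    combination step. No projection is ever needed.
    """
    x = x0.copy()
    for k in range(num_iters):
        g = grad(x)
        i = int(np.argmin(g))            # vertex minimizing the linearization
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (k + 2.0)          # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Illustrative objective (an assumption for this sketch):
# minimize f(x) = 0.5 * ||x - b||^2 over the simplex, so grad f(x) = x - b.
b = np.array([0.1, 0.7, 0.2])
x = frank_wolfe_simplex(lambda x: x - b, np.ones(3) / 3)
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained for free; the price is the well-known O(1/k) rate in function value for smooth convex problems, which the paper's unified analysis extends to composite, weakly smooth, and stochastic settings.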
Related Items (15)
Cites Work
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Gradient methods for minimizing composite functions
- First-order methods of smooth convex optimization with inexact oracle
- Implementation of an optimal first-order method for strongly convex total variation regularization
- Conditional gradient algorithms for norm-regularized smooth convex optimization
- Universal gradient methods for convex optimization problems
- Introductory lectures on convex optimization. A basic course.
- Generalized uniformly optimal methods for nonlinear programming
- Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
- Conditional Gradient Sliding for Convex Optimization
- Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Optimal methods of smooth convex minimization
- Some comments on Wolfe's ‘away step’
- Convergence Rates for Conditional Gradient Sequences Generated by Implicit Step Length Rules
- Conditional Gradient Algorithms for Rank-One Matrix Approximations with a Sparsity Constraint
- Regularization and Variable Selection Via the Elastic Net
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization