Convergence Rates for Conditional Gradient Sequences Generated by Implicit Step Length Rules
Publication: 3907009
DOI: 10.1137/0318035 · zbMATH Open: 0457.65048 · OpenAlex: W2082366275 · MaRDI QID: Q3907009 · FDO: Q3907009
Authors: J. C. Dunn
Publication date: 1980
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://doi.org/10.1137/0318035
Keywords: Banach space; convergence rates; conditional gradient method; one-dimensional minimization; implicit step length rules
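As the keywords indicate, the paper studies the conditional gradient (Frank-Wolfe) method when the step length is chosen implicitly, e.g. by one-dimensional minimization along the search direction. The following is a minimal sketch (not taken from the paper) of such an iteration for the illustrative problem of minimizing f(x) = 0.5·||Ax − b||² over the probability simplex; all names (A, b, max_iter, tol) are assumptions for the example.

    # Conditional gradient (Frank-Wolfe) with an implicit step rule:
    # the step is the exact minimizer of t -> f(x + t*d) on [0, 1].
    import numpy as np

    def conditional_gradient(A, b, max_iter=200, tol=1e-8):
        n = A.shape[1]
        x = np.ones(n) / n                  # feasible start: simplex center
        for _ in range(max_iter):
            grad = A.T @ (A @ x - b)        # gradient of 0.5*||Ax - b||^2
            s = np.zeros(n)                 # linear minimization oracle:
            s[np.argmin(grad)] = 1.0        # a simplex vertex minimizes <grad, .>
            d = s - x                       # feasible direction
            gap = -grad @ d                 # Frank-Wolfe duality gap
            if gap < tol:
                break
            # Exact one-dimensional minimization: for a quadratic f,
            # t* = gap / ||A d||^2, clamped to [0, 1].
            Ad = A @ d
            denom = Ad @ Ad
            t = 1.0 if denom == 0 else min(1.0, gap / denom)
            x = x + t * d
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A = rng.standard_normal((30, 10))
        b = rng.standard_normal(30)
        x = conditional_gradient(A, b)
        print("iterate stays on the simplex:", np.isclose(x.sum(), 1.0))

The closed-form step here exploits the quadratic objective; for a general convex f, the implicit rule would instead solve the scalar problem min_{t in [0,1]} f(x + t·d) numerically.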
Cited In (38)
- Bayesian quadrature, energy minimization, and space-filling design
- Newton-Goldstein convergence rates for convex constrained minimization problems with singular solutions
- Performance analysis of greedy algorithms for minimising a maximum mean discrepancy
- On convergence of binary trust-region steepest descent
- A sparse control approach to optimal sensor placement in PDE-constrained parameter estimation problems
- A Frank-Wolfe based branch-and-bound algorithm for mean-risk optimization
- Linear convergence of accelerated conditional gradient algorithms in spaces of measures
- Approximate Douglas-Rachford algorithm for two-sets convex feasibility problems
- Time-optimality by distance-optimality for parabolic control systems
- A generalized conditional gradient method and its connection to an iterative shrinkage method
- Analysis of the Frank-Wolfe method for convex composite optimization involving a logarithmically-homogeneous barrier
- Robust analysis in stochastic simulation: computation and performance guarantees
- Optimal Coatings, Bang‐Bang Controls, And Gradient Techniques
- Projection-free accelerated method for convex optimization
- Superlinear convergence of a trust region type successive linear programming method
- A Newton conditional gradient method for constrained nonlinear systems
- A framework for convex-constrained monotone nonlinear equations and its special cases
- A class of gap functions for variational inequalities
- Inexact variable metric method for convex-constrained optimization problems
- An inexact Newton-like conditional gradient method for constrained nonlinear systems
- Alternating conditional gradient method for convex feasibility problems
- New analysis and results for the Frank-Wolfe method
- Finite convergence of algorithms for nonlinear programs and variational inequalities
- Solving variational inequality and fixed point problems by line searches and potential optimization
- Complexity bounds for primal-dual methods minimizing the model of objective function
- An adaptive partial linearization method for optimization problems on product sets
- Extremal types for certain \(L^p\) minimization problems and associated large scale nonlinear programs
- Simplified versions of the conditional gradient method
- Near-optimal coresets of kernel density estimates
- Conditional gradient sliding for convex optimization
- Gauss-Newton methods with approximate projections for solving constrained nonlinear least squares problems
- A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
- The effect of perturbations on the convergence rates of optimization algorithms
- Gradient methods with regularization for constrained optimization problems and their complexity estimates
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- Asymptotic linear convergence of fully-corrective generalized conditional gradient methods
- Secant-inexact projection algorithms for solving a new class of constrained mixed generalized equations problems
- Continuously bounds-preserving discontinuous Galerkin methods for hyperbolic conservation laws