Convergence Rates for Conditional Gradient Sequences Generated by Implicit Step Length Rules
DOI: 10.1137/0318035
zbMath: 0457.65048
OpenAlex: W2082366275
MaRDI QID: Q3907009
Author: J. C. Dunn
Publication date: 1980
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://doi.org/10.1137/0318035
Keywords: convergence rates; Banach space; conditional gradient method; one-dimensional minimization; implicit step length rules
Related Items
Inexact variable metric method for convex-constrained optimization problems
A class of gap functions for variational inequalities
Projection-free accelerated method for convex optimization
A generalized conditional gradient method and its connection to an iterative shrinkage method
Superlinear convergence of a trust region type successive linear programming method
An inexact Newton-like conditional gradient method for constrained nonlinear systems
Analysis of the Frank-Wolfe method for convex composite optimization involving a logarithmically-homogeneous barrier
An adaptive partial linearization method for optimization problems on product sets
A framework for convex-constrained monotone nonlinear equations and its special cases
Near-optimal coresets of kernel density estimates
Secant-inexact projection algorithms for solving a new class of constrained mixed generalized equations problems
Asymptotic linear convergence of fully-corrective generalized conditional gradient methods
Time-optimality by distance-optimality for parabolic control systems
Approximate Douglas-Rachford algorithm for two-sets convex feasibility problems
A Frank-Wolfe based branch-and-bound algorithm for mean-risk optimization
Bayesian Quadrature, Energy Minimization, and Space-Filling Design
Conditional gradient type methods for composite nonlinear and stochastic optimization
Gradient methods with regularization for constrained optimization problems and their complexity estimates
Simplified versions of the conditional gradient method
Solving variational inequality and fixed point problems by line searches and potential optimization
A Linearly Convergent Variant of the Conditional Gradient Algorithm under Strong Convexity, with Applications to Online and Stochastic Optimization
Newton-Goldstein convergence rates for convex constrained minimization problems with singular solutions
Complexity bounds for primal-dual methods minimizing the model of objective function
A Newton conditional gradient method for constrained nonlinear systems
New analysis and results for the Frank-Wolfe method
Alternating conditional gradient method for convex feasibility problems
Conditional Gradient Sliding for Convex Optimization
Robust Analysis in Stochastic Simulation: Computation and Performance Guarantees
Gauss-Newton methods with approximate projections for solving constrained nonlinear least squares problems
Optimal Coatings, Bang‐Bang Controls, And Gradient Techniques
Linear convergence of accelerated conditional gradient algorithms in spaces of measures
The effect of perturbations on the convergence rates of optimization algorithms
A sparse control approach to optimal sensor placement in PDE-constrained parameter estimation problems
Extremal types for certain \(L^p\) minimization problems and associated large scale nonlinear programs
Performance analysis of greedy algorithms for minimising a maximum mean discrepancy
Finite convergence of algorithms for nonlinear programs and variational inequalities