Conditional gradient algorithms with open loop step size rules

From MaRDI portal

Publication: 1244889

DOI: 10.1016/0022-247X(78)90137-3
zbMath: 0374.49017
Wikidata: Q56763539
Scholia: Q56763539
MaRDI QID: Q1244889

J. C. Dunn
S. Harshbarger

Publication date: 1978

Published in: Journal of Mathematical Analysis and Applications
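The publication studies conditional gradient (Frank-Wolfe) algorithms driven by open loop step size rules, i.e. step sizes fixed in advance rather than chosen by line search. A minimal sketch of such a method, using the classical open loop rule gamma_k = 2/(k + 2) on a quadratic objective over the probability simplex, is given below; the objective, problem data, and function name are illustrative assumptions, not taken from the paper itself.

```python
# Minimal sketch (assumed example, not the paper's own algorithm statement):
# conditional gradient / Frank-Wolfe with the open loop step size 2/(k + 2).
import numpy as np


def frank_wolfe_simplex(Q, b, num_iters=200):
    """Minimize f(x) = 0.5 x^T Q x + b^T x over the probability simplex."""
    n = b.shape[0]
    x = np.full(n, 1.0 / n)          # feasible starting point
    for k in range(num_iters):
        grad = Q @ x + b             # gradient of the quadratic objective
        # Linear minimization oracle over the simplex: the vertex e_i
        # with i = argmin_i of the gradient component.
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0
        gamma = 2.0 / (k + 2)        # open loop step size rule
        x = x + gamma * (s - x)      # convex combination keeps x feasible
    return x


# Usage with small random (assumed) problem data.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = A.T @ A                          # positive semidefinite Hessian
b = rng.standard_normal(5)
print(frank_wolfe_simplex(Q, b))
```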




Related Items

Decomposition techniques for bilinear saddle point problems and variational inequalities with affine monotone operators
Linearly convergent away-step conditional gradient for non-strongly convex functions
The Cyclic Block Conditional Gradient Method for Convex Optimization Problems
Frank--Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning
New results on subgradient methods for strongly convex optimization problems with a unified analysis
Convergence and rate of convergence of some greedy algorithms in convex optimization
Unnamed Item
Screening for a reweighted penalized conditional gradient method
Analysis of the Frank-Wolfe method for convex composite optimization involving a logarithmically-homogeneous barrier
Large-Scale Nonconvex Optimization: Randomization, Gap Estimation, and Numerical Resolution
Secant-inexact projection algorithms for solving a new class of constrained mixed generalized equations problems
Asymptotic linear convergence of fully-corrective generalized conditional gradient methods
Revisiting the approximate Carathéodory problem via the Frank-Wolfe algorithm
Bayesian Quadrature, Energy Minimization, and Space-Filling Design
Dual subgradient algorithms for large-scale nonsmooth learning problems
Conditional gradient algorithms for norm-regularized smooth convex optimization
Frank-Wolfe and friends: a journey into projection-free first-order optimization methods
Simplified versions of the conditional gradient method
Primal and dual predicted decrease approximation methods
The Alternating Descent Conditional Gradient Method for Sparse Inverse Problems
Solving variational inequality and fixed point problems by line searches and potential optimization
Robust budget allocation via continuous submodular functions
Unnamed Item
Adaptive conditional gradient method
New analysis and results for the Frank-Wolfe method
Low Complexity Regularization of Linear Inverse Problems
Generalized Conditional Gradient with Augmented Lagrangian for Composite Minimization
Complexity of linear minimization and projection on some sets
Scalable Robust Matrix Recovery: Frank--Wolfe Meets Proximal Methods
Generalized Conditional Gradient for Sparse Estimation
Linear convergence of accelerated conditional gradient algorithms in spaces of measures
Greedy approximation in convex optimization
Performance analysis of greedy algorithms for minimising a maximum mean discrepancy
On the Effectiveness of Richardson Extrapolation in Data Science
Solving variational inequalities with monotone operators on domains given by linear minimization oracles




