Subgradient methods for huge-scale optimization problems
From MaRDI portal
Publication: 403646
DOI: 10.1007/s10107-013-0686-4 · zbMath: 1297.90120 · OpenAlex: W2129470876 · MaRDI QID: Q403646
Publication date: 29 August 2014
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: http://uclouvain.be/cps/ucl/doc/core/documents/coredp2012_2web.pdf
Mathematics Subject Classification: Analysis of algorithms and problem complexity (68Q25); Convex programming (90C25); Minimax problems in mathematical programming (90C47)
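The paper this record describes develops subgradient methods for huge-scale (sparse) convex problems. As a minimal illustration of the underlying technique only, and not of the paper's specific sparse-update scheme, a basic subgradient method with diminishing step sizes can be sketched as follows (the test function and step-size rule are illustrative assumptions, not taken from the paper):

```python
def subgradient_descent(f, subgrad, x0, steps=1000):
    """Basic subgradient method: x_{k+1} = x_k - h_k * g_k, where g_k is a
    subgradient of the nonsmooth convex f at x_k and h_k = 1/sqrt(k+1) is
    the classical diminishing step size. Since the method is not monotone,
    the best iterate seen so far is tracked and returned."""
    best_x, best_f = x0, f(x0)
    x = x0
    for k in range(steps):
        g = subgrad(x)
        h = 1.0 / (k + 1) ** 0.5  # diminishing, non-summable step size
        x = x - h * g
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Toy example: f(x) = |x - 3| with subgradient sign(x - 3) away from 3.
f = lambda x: abs(x - 3.0)
subgrad = lambda x: 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)
x_star, f_star = subgradient_descent(f, subgrad, x0=0.0)
```

The iterates oscillate around the minimizer with shrinking amplitude, which is why the running best value, rather than the last iterate, is reported.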
Related Items
- On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions
- Efficient numerical methods to solve sparse linear equations with application to PageRank
- Subgradient method with feasible inexact projections for constrained convex optimization problems
- An adaptive three-term conjugate gradient method based on self-scaling memoryless BFGS matrix
- Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
- Adaptive subgradient methods for mathematical programming problems with quasiconvex functions
- Faster first-order primal-dual methods for linear programming using restarts and sharpness
- Infeasibility Detection with Primal-Dual Hybrid Gradient for Large-Scale Linear Programming
- A subgradient method with non-monotone line search
- Complexity of first-order inexact Lagrangian and penalty methods for conic convex programming
- Nesterov's smoothing and excessive gap methods for an optimization problem in VLSI placement
- A Coordinate-Descent Primal-Dual Algorithm with Large Step Size and Possibly Nonseparable Functions
- Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization
- An acceleration procedure for optimal first-order methods
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization
- Unnamed Item
- On the properties of the method of minimization for convex functions with relaxation on the distance to extremum
- Numerical study of high-dimensional optimization problems using a modification of Polyak's method
- On solving the densest k-subgraph problem on large graphs
- Control analysis and design via randomised coordinate polynomial minimisation
- On the efficiency of a randomized mirror descent algorithm in online optimization problems
- Parallel coordinate descent methods for big data optimization
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Smooth minimization of non-smooth functions
- First-order algorithm with \({\mathcal{O}(\ln(1/\epsilon))}\) convergence for \({\epsilon}\)-equilibrium in two-person zero-sum games
- Characterizations of linear suboptimality for mathematical programs with equilibrium constraints
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization