A merit function approach to the subgradient method with averaging
Publication:5459823
DOI: 10.1080/10556780701318796
zbMATH Open: 1146.90050
OpenAlex: W2059050877
MaRDI QID: Q5459823
Authors: Andrzej Ruszczyński
Publication date: 29 April 2008
Published in: Optimization Methods & Software
Full work available at URL: https://doi.org/10.1080/10556780701318796
Recommendations
- Merit functions and descent algorithms for a class of variational inequality problems
- Averaged Subgradient Methods for Constrained Convex Optimization and Nash Equilibria Computation
- Subgradient method for minimization of convex functionals and some efficiency bounds
- Merit functions for nonsmooth complementarity problems and related descent algorithms
- Globally and Superlinearly Convergent Algorithm for Minimizing a Normal Merit Function
- A method of conjugate subgradients for the minimization of functionals
- Monotone methods with averaging of subgradients and their stochastic finite-difference analogs
- scientific article; zbMATH DE number 1306987
- scientific article; zbMATH DE number 3910168
- A note on the convergence of subgradient optimization methods
Cites Work
- Title not available
- Ergodic, primal convergence in dual subgradient schemes for convex programming
- Recovery of primal solutions when using subgradient optimization methods to solve Lagrangian duals of linear programs
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- A dual scheme for traffic assignment problems
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Incremental subgradient methods for nondifferentiable optimization
- The volume algorithm: Producing primal solutions with a subgradient method
- Error stability properties of generalized gradient-type algorithms
- Title not available
- Stochastic approximation method with gradient averaging for unconstrained problems
- Averaged Subgradient Methods for Constrained Convex Optimization and Nash Equilibria Computation
- On the convergence of conditional ε-subgradient methods for convex programs and convex-concave saddle-point problems
- Ergodic convergence in subgradient optimization
- A Linearization Method for Nonsmooth Stochastic Programming Problems
- Title not available
Cited In (8)
- Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations
- Dual subgradient method with averaging for optimal resource allocation
- Stochastic conditional gradient methods: from convex minimization to submodular maximization
- Convergence of a stochastic subgradient method with averaging for nonsmooth nonconvex constrained optimization
- Monotone methods with averaging of subgradients and their stochastic finite-difference analogs
- Subgradient algorithms on Riemannian manifolds of lower bounded curvatures
- A subgradient method based on gradient sampling for solving convex optimization problems
- Title not available