Weak subgradient method for solving nonsmooth nonconvex optimization problems
Publication: 5009157
DOI: 10.1080/02331934.2020.1745205
zbMath: 1476.90254
OpenAlex: W3013442555
MaRDI QID: Q5009157
Refail Kasimbeyli, Gulcin Dinc Yalcin
Publication date: 19 August 2021
Published in: Optimization
Full work available at URL: https://doi.org/10.1080/02331934.2020.1745205
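
The method of the title iterates with weak subgradients: pairs (v, c) with c ≥ 0 such that f(y) ≥ f(x̄) + ⟨v, y − x̄⟩ − c‖y − x̄‖ for all y, a notion that requires no convexity of f (see the cited work on weak subdifferentials, directional derivatives, and radial epiderivatives below). The following minimal Python sketch only illustrates this idea on the hand-picked nonconvex nonsmooth function f(x) = x² − |x|; the oracle constants, step-size rule, and iteration budget are illustrative assumptions, not the paper's algorithm or parameters.

    def f(x):
        return x ** 2 - abs(x)              # nonsmooth (kink at 0), nonconvex near 0

    def weak_subgradient(x):
        # Oracle returning a pair (v, c) with
        #   f(y) >= f(x) + v * (y - x) - c * abs(y - x)   for all y,
        # verified by hand for this particular f; the constants are illustrative.
        if x == 0.0:
            return 0.0, 1.0                 # y**2 - |y| >= -|y| holds for all y
        v = 2.0 * x - (1.0 if x > 0.0 else -1.0)   # gradient where f is smooth
        return v, 2.0                       # c = 2 makes the inequality global here

    x = 0.8                                 # arbitrary starting point
    for k in range(200):
        v, c = weak_subgradient(x)
        x -= (0.3 / (k + 1)) * v            # diminishing step sizes (an assumed rule)

    print(x, f(x))                          # tends to a minimizer x = 0.5, f = -0.25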
Related Items (2)
- Generalized derivatives and optimality conditions in nonconvex optimization
- Optimality conditions for nonsmooth fuzzy optimization models under the gH-weak subdifferentiability
Cites Work
- Codifferential method for minimizing nonsmooth DC functions
- Optimality conditions in nonconvex optimization via weak subdifferentials
- Globally convergent limited memory bundle method for large-scale nonsmooth optimization
- Nonsmooth optimization through mesh adaptive direct search and variable neighborhood search
- Discrete gradient method: Derivative-free method for nonsmooth optimization
- On the choice of step size in subgradient optimization
- Error stability properties of generalized gradient-type algorithms
- A bundle-Newton method for nonsmooth unconstrained minimization
- A trust region algorithm for minimization of locally Lipschitzian functions
- Stability and duality of nonconvex problems via augmented Lagrangian
- On the convergence of conditional ε-subgradient methods for convex programs and convex-concave saddle-point problems
- On augmented Lagrangians for optimization problems with a single constraint
- Augmented Lagrangian duality and nondifferentiable optimization methods in nonconvex programming
- A unified approach to global convergence of trust region methods for nonsmooth optimization
- The effect of deterministic noise in subgradient methods
- Methods of descent for nondifferentiable optimization
- Superlinearly convergent algorithm for min-max problems
- A new nonsmooth trust region algorithm for locally Lipschitz unconstrained optimization problems
- A conic scalarization method in multi-objective optimization
- An approximate subgradient algorithm for unconstrained nonsmooth, nonconvex optimization
- On a modified subgradient algorithm for dual problems via sharp augmented Lagrangian
- Inexact subgradient methods for quasi-convex optimization problems
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Introduction to Nonsmooth Optimization
- Convergence of the Gradient Sampling Algorithm for Nonsmooth Nonconvex Optimization
- On Weak Subdifferentials, Directional Derivatives, and Radial Epiderivatives for Nonconvex Functions
- A Nonlinear Cone Separation Theorem and Scalarization in Nonconvex Vector Optimization
- Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods
- A generalization of Polyak's convergence result for subgradient optimization
- Nonlinear programming methods in the presence of noise
- A method for minimizing convex functions based on continuous approximations to the subdifferential
- A sharp augmented Lagrangian-based method in constrained non-convex optimization
- Minimizing Nonconvex Nonsmooth Functions via Cutting Planes and Proximity Control
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- A DC piecewise affine model and a bundling technique in nonconvex nonsmooth minimization
- A Method for Minimization of Quasidifferentiable Functions
- The Efficiency of Subgradient Projection Methods for Convex Optimization, Part I: General Level Methods
- The Efficiency of Subgradient Projection Methods for Convex Optimization, Part II: Implementations and Extensions
- Restricted Step and Levenberg–Marquardt Techniques in Proximal Bundle Methods for Nonconvex Nondifferentiable Optimization
- Convergence Analysis of Deflected Conditional Approximate Subgradient Methods
- Radial epiderivatives and set-valued optimization
- The modified subgradient algorithm based on feasible values
- A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization
- New limited memory bundle method for large-scale nonsmooth optimization
- Nonlinear Programming
- Minimization of unsmooth functionals
- Interior Gradient and Epsilon-Subgradient Descent Methods for Constrained Convex Minimization
- Convergence and efficiency of subgradient methods for quasiconvex minimization
- Globally convergent variable metric method for nonconvex nondifferentiable unconstrained minimization