Descent methods for composite nondifferentiable optimization problems
From MaRDI portal
Publication: 3705231
DOI: 10.1007/BF01584377
zbMath: 0581.90084
MaRDI QID: Q3705231
Publication date: 1985
Published in: Mathematical Programming
Keywords: epi-convergence, Clarke subdifferential, descent methods, Gauss-Newton, Armijo stepsize, \(\epsilon \)-subdifferential, casting functions, globally defined descent algorithms, Modified Newton, non-differentiable objective functions, Variable-Metric
Numerical mathematical programming methods (65K05) Nonlinear programming (90C30) Numerical methods based on nonlinear programming (49M37)
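The keywords above (Gauss-Newton, Armijo stepsize, descent methods, non-differentiable objective functions) refer to minimizing composites \(h(c(x))\) where the outer function \(h\) is convex but nonsmooth and the inner map \(c\) is smooth. As context only, here is a minimal sketch of a Gauss-Newton-type (prox-linear) descent step for the scalar model problem \(f(x) = |c(x)|\); it is not the paper's own algorithm, and the function names, the fixed proximal parameter, and the omission of an Armijo line search are all simplifying assumptions of this sketch.

```python
def prox_linear_step(c, g, t):
    """Closed-form minimizer of the local model |c + g*d| + d**2 / (2*t).

    The smooth inner map is linearized (value c, derivative g) inside the
    nonsmooth outer function |.|, plus a proximal term that keeps d small.
    """
    if c - t * g * g > 0:   # model stays positive: smooth branch, d = -t*g
        return -t * g
    if c + t * g * g < 0:   # model stays negative: smooth branch, d = t*g
        return t * g
    return -c / g           # otherwise the kink c + g*d = 0 is optimal

def minimize_abs_composite(c_fun, dc_fun, x0, t=1.0, tol=1e-10, max_iter=100):
    """Descent iteration for f(x) = |c_fun(x)| via repeated prox-linear steps."""
    x = x0
    for _ in range(max_iter):
        c, g = c_fun(x), dc_fun(x)
        d = prox_linear_step(c, g, t)
        if abs(d) < tol:    # step length serves as a stationarity measure
            break
        x += d
    return x

# Example: f(x) = |x**2 - 2| is nondifferentiable at its minimizer sqrt(2).
x_star = minimize_abs_composite(lambda x: x * x - 2, lambda x: 2.0 * x, x0=2.0)
print(x_star)  # approx. 1.41421
```

A full method of the kind the keywords describe would accept or shrink such model steps with an Armijo-type stepsize rule to guarantee global descent; the closed-form scalar subproblem above sidesteps that for brevity.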
Related Items (46)
- A trust region algorithm for minimization of locally Lipschitzian functions
- A proximal method for composite minimization
- Epigraphical nesting: A unifying theory for the convergence of algorithms
- Proximal methods avoid active strict saddles of weakly convex functions
- A relative weighting method for estimating parameters and variances in multiple data sets
- Iteration functions in some nonsmooth optimization algorithms
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- A Gauss-Newton method for convex composite optimization
- A joint estimation approach to sparse additive ordinary differential equations
- Optimality conditions for a class of composite multiobjective nonsmooth optimization problems
- Unification of basic and composite nondifferentiable optimization
- Generalized Kalman smoothing: modeling and algorithms
- An Exact Penalty Method for Nonconvex Problems Covering, in Particular, Nonlinear Programming, Semidefinite Programming, and Second-Order Cone Programming
- The multiproximal linearization method for convex composite problems
- Riemannian linearized proximal algorithms for nonnegative inverse eigenvalue problem
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Consistent approximations in composite optimization
- Robust optimality and duality for composite uncertain multiobjective optimization in Asplund spaces with its applications
- Second order necessary and sufficient conditions for convex composite NDO
- Linearized proximal algorithms with adaptive stepsizes for convex composite optimization with applications
- Global convergence of a semi-infinite optimization method
- Relax-and-split method for nonconvex inverse problems
- The value function approach to convergence analysis in composite optimization
- Gauss-Newton method for convex composite optimizations on Riemannian manifolds
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
- Convergence analysis of the Gauss-Newton method for convex inclusion and convex-composite optimization problems
- Convex composite multi-objective nonsmooth programming
- A coordinate gradient descent method for nonsmooth separable minimization
- Smoothing methods for nonsmooth, nonconvex minimization
- A nonlinear descent method for a variational inequality on a nonconvex set
- On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- Strong Metric (Sub)regularity of Karush–Kuhn–Tucker Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization and the Quadratic Convergence of Newton’s Method
- Offline state estimation for hybrid systems via nonsmooth variable projection
- Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
- Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- A robust sequential quadratic programming method
- Variable Metric Forward-Backward Algorithm for Composite Minimization Problems
- Efficiency of minimizing compositions of convex functions and smooth maps
- Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
- Manifold Sampling for $\ell_1$ Nonconvex Optimization
- Recent advances in trust region algorithms
- High-Order Optimization Methods for Fully Composite Problems
- A Study of Convex Convex-Composite Functions via Infimal Convolution with Applications
Cites Work
- Reflections on nondifferentiable optimization. I: Ball gradient
- Reflections on nondifferentiable optimization. II: Convergence
- Application of the Armijo stepsize rule to the solution of a nonlinear system of equalities and inequalities
- Discrete, non-linear approximation problems in polyhedral norms
- Conditions for Superlinear Convergence in \(l_1\) and \(l_\infty \) Solutions of Overdetermined Non-linear Equations
- On the global convergence of trust region algorithms for unconstrained minimization
- Optimization and nonsmooth analysis
- A Gauss-Newton Approach to Solving Generalized Inequalities
- Lipschitz $r$-continuity of the approximative subdifferential of a convex function.
- A global quadratic algorithm for solving a system of mixed equalities and inequalities
- A model algorithm for composite nondifferentiable optimization problems
- On the Extension of Constrained Optimization Algorithms from Differentiable to Nondifferentiable Problems
- Semismooth and Semiconvex Functions in Constrained Optimization
- Optimization of Lipschitz continuous functions
- An Algorithm for Constrained Optimization with Semismooth Functions
- Global and superlinear convergence of an algorithm for one-dimensional minimization of convex functions
- Convergence Conditions for Ascent Methods
- Convex Analysis