Descent methods for composite nondifferentiable optimization problems
DOI: 10.1007/BF01584377
zbMATH Open: 0581.90084
MaRDI QID: Q3705231
Authors: James V. Burke
Publication date: 1985
Published in: Mathematical Programming
Recommendations
- Methods of descent for nondifferentiable optimization
- scientific article; zbMATH DE number 3972657
- A Descent Numerical Method for Optimization Problems with Nondifferentiable Cost Functionals
- Descent methods for mixed variational inequalities with non-smooth mappings
- A descent algorithm for the nonlinear complementarity problems
- A descent algorithm for nonsmooth convex optimization
- scientific article; zbMATH DE number 7347554
- Publication:3479818
- scientific article; zbMATH DE number 4204134
- A descent method with linear programming subproblems for nondifferentiable convex optimization
Keywords: Clarke subdifferential; epi-convergence; Gauss-Newton; descent methods; Armijo stepsize; \(\epsilon\)-subdifferential; casting functions; globally defined descent algorithms; Modified Newton; non-differentiable objective functions; Variable-Metric
MSC classification: Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Numerical methods based on nonlinear programming (49M37)
Cites Work
- Title not available
- Convergence Conditions for Ascent Methods
- Convex Analysis
- Optimization and nonsmooth analysis
- Semismooth and Semiconvex Functions in Constrained Optimization
- Title not available
- Optimization of Lipschitz continuous functions
- An Algorithm for Constrained Optimization with Semismooth Functions
- On the global convergence of trust region algorithms for unconstrained minimization
- A Gauss-Newton Approach to Solving Generalized Inequalities
- Application of the Armijo stepsize rule to the solution of a nonlinear system of equalities and inequalities
- A global quadratic algorithm for solving a system of mixed equalities and inequalities
- A model algorithm for composite nondifferentiable optimization problems
- Lipschitz \(r\)-continuity of the approximative subdifferential of a convex function
- On the Extension of Constrained Optimization Algorithms from Differentiable to Nondifferentiable Problems
- Title not available
- Conditions for Superlinear Convergence in \(l_1\) and \(l_\infty\) Solutions of Overdetermined Non-linear Equations
- Global and superlinear convergence of an algorithm for one-dimensional minimization of convex functions
- Reflections on nondifferentiable optimization. I: Ball gradient
- Reflections on nondifferentiable optimization. II: Convergence
- Discrete, non-linear approximation problems in polyhedral norms
Cited In (51)
- A Levenberg-Marquardt method for nonsmooth regularized least squares
- A proximal method for composite minimization
- Second order necessary and sufficient conditions for convex composite NDO
- An algorithm for composite nonsmooth optimization problems
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- A robust sequential quadratic programming method
- A trust region algorithm for minimization of locally Lipschitzian functions
- Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
- On convergence rates of linearized proximal algorithms for convex composite optimization with applications
- Generalized Kalman smoothing: modeling and algorithms
- Epigraphical nesting: A unifying theory for the convergence of algorithms
- Title not available
- Error bounds, quadratic growth, and linear convergence of proximal methods
- Stochastic model-based minimization of weakly convex functions
- Gauss-Newton method for convex composite optimizations on Riemannian manifolds
- An exact penalty method for nonconvex problems covering, in particular, nonlinear programming, semidefinite programming, and second-order cone programming
- Convergence analysis of the Gauss-Newton method for convex inclusion and convex-composite optimization problems
- Proximal methods avoid active strict saddles of weakly convex functions
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- A nonlinear descent method for a variational inequality on a nonconvex set
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- Variable Metric Forward-Backward Algorithm for Composite Minimization Problems
- Efficiency of minimizing compositions of convex functions and smooth maps
- Proximally guided stochastic subgradient method for nonsmooth, nonconvex problems
- Unification of basic and composite nondifferentiable optimization
- High-order optimization methods for fully composite problems
- Title not available
- Robust optimality and duality for composite uncertain multiobjective optimization in Asplund spaces with its applications
- Manifold sampling for \(\ell_1\) nonconvex optimization
- A Gauss-Newton method for convex composite optimization
- The value function approach to convergence analysis in composite optimization
- A joint estimation approach to sparse additive ordinary differential equations
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
- Global convergence of a semi-infinite optimization method
- A relative weighting method for estimating parameters and variances in multiple data sets
- Convex composite multi-objective nonsmooth programming
- Consistent approximations in composite optimization
- Recent advances in trust region algorithms
- Linearized proximal algorithms with adaptive stepsizes for convex composite optimization with applications
- The multiproximal linearization method for convex composite problems
- Relax-and-split method for nonconvex inverse problems
- Title not available
- Iteration functions in some nonsmooth optimization algorithms
- Optimality conditions for a class of composite multiobjective nonsmooth optimization problems
- Offline state estimation for hybrid systems via nonsmooth variable projection
- Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
- Riemannian linearized proximal algorithms for nonnegative inverse eigenvalue problem
- Strong metric (sub)regularity of Karush-Kuhn-Tucker mappings for piecewise linear-quadratic convex-composite optimization and the quadratic convergence of Newton's method
- A study of convex convex-composite functions via infimal convolution with applications
- Smoothing methods for nonsmooth, nonconvex minimization
- A coordinate gradient descent method for nonsmooth separable minimization