Combination of steepest descent and BFGS methods for nonconvex nonsmooth optimization
From MaRDI portal
Publication: Q285034
DOI: 10.1007/s11075-015-0034-2
zbMATH Open: 1338.49071
OpenAlex: W1169901681
Authors: Rohollah Yousefpour
Publication date: 18 May 2016
Published in: Numerical Algorithms
Full work available at URL: https://doi.org/10.1007/s11075-015-0034-2
Recommendations
- Globally convergent BFGS method for nonsmooth convex optimization
- A globally convergent BFGS method with nonmonotone line search for non-convex minimization
- An adaptive gradient sampling algorithm for non-smooth optimization
- A trust region algorithm for nonsmooth optimization
- A quasi-Newton algorithm for nonconvex, nonsmooth optimization with global convergence guarantees
Keywords: Lipschitz functions · Wolfe conditions · nonconvex nonsmooth optimization · nonsmooth BFGS method · nonsmooth line search method
Cites Work
- A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization
- Discrete gradient method: Derivative-free method for nonsmooth optimization
- Title not available
- Title not available
- Benchmarking optimization software with performance profiles
- A quasi-Newton approach to nonsmooth convex optimization problems in machine learning
- Line search algorithms with guaranteed sufficient decrease
- Title not available
- Title not available
- A bundle-Newton method for nonsmooth unconstrained minimization
- A family of variable metric proximal methods
- Methods of descent for nondifferentiable optimization
- Title not available
- Title not available
- A DC piecewise affine model and a bundling technique in nonconvex nonsmooth minimization
- A Globally and Superlinearly Convergent Algorithm for Nonsmooth Convex Minimization
- Generalized Bundle Methods
- Globally convergent limited memory bundle method for large-scale nonsmooth optimization
- Variable metric bundle methods: From conceptual to implementable forms
- Globally convergent variable metric method for convex nonsmooth unconstrained minimization
- Title not available
- Continuous subdifferential approximations and their applications
- An effective nonsmooth optimization algorithm for locally Lipschitz functions
- Optimization of Lipschitz continuous functions
- Nondifferentiable optimization via adaptive smoothing
- An efficient line search for nonlinear least squares
- Proximal quasi-Newton methods for nondifferentiable convex optimization
- On the superlinear convergence of the variable metric proximal point algorithm using Broyden and BFGS matrix secant updating
- A quasi-second-order proximal bundle algorithm
- Piecewise linear approximations in nonconvex nonsmooth optimization
- Limited memory bundle method for large bound constrained nonsmooth optimization: convergence analysis
- Convergence of the Gradient Sampling Algorithm for Nonsmooth Nonconvex Optimization
- Title not available
- Finding the nearest point in a polytope
- An Algorithm for Constrained Optimization with Semismooth Functions
- Quasi-Newton Bundle-Type Methods for Nondifferentiable Convex Optimization
- Finding the Point of a Polyhedron Closest to the Origin
- Minimizing Nonconvex Nonsmooth Functions via Cutting Planes and Proximity Control
- Asymptotic Convergence Analysis of Some Inexact Proximal Point Algorithms for Minimization
- A Descent Numerical Method for Optimization Problems with Nondifferentiable Cost Functionals
- Globally convergent variable metric method for nonconvex nondifferentiable unconstrained minimization
- Algorithms for finite and semi-infinite Min-Max-Min problems using adaptive smoothing techniques
- Computing proximal points of nonconvex functions
Cited In (6)
- An extension of the quasi-Newton method for minimizing locally Lipschitz functions
- Line search algorithms for locally Lipschitz functions on Riemannian manifolds
- An SQP method for minimization of locally Lipschitz functions with nonlinear constraints
- A limited-memory quasi-Newton algorithm for bound-constrained non-smooth optimization
- A new descent algorithm using the three-step discretization method for solving unconstrained optimization problems
- Bregman distance regularization for nonsmooth and nonconvex optimization