A quasi-Newton algorithm for nonconvex, nonsmooth optimization with global convergence guarantees
DOI: 10.1007/s12532-015-0086-2 · zbMATH Open: 1333.49042 · OpenAlex: W902317526 · MaRDI QID: Q903922
Authors: Frank E. Curtis, Xiaocun Que
Publication date: 15 January 2016
Published in: Mathematical Programming Computation
Full work available at URL: https://doi.org/10.1007/s12532-015-0086-2
Recommendations
- An adaptive gradient sampling algorithm for non-smooth optimization
- An approximate subgradient algorithm for unconstrained nonsmooth, nonconvex optimization
- A limited-memory quasi-Newton algorithm for bound-constrained non-smooth optimization
- A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization
- A direct search quasi-Newton method for nonsmooth unconstrained optimization
Keywords: nonconvex optimization; unconstrained optimization; nonsmooth optimization; quasi-Newton methods; line search methods; gradient sampling
MSC classifications: Numerical mathematical programming methods (65K05); Numerical optimization and variational techniques (65K10); Nonconvex programming, global optimization (90C26); Nonlinear programming (90C30); Methods of quasi-Newton type (90C53); Newton-type methods (49M15); Numerical methods based on necessary conditions (49M05)
Cites Work
- A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization
- New limited memory bundle method for large-scale nonsmooth optimization
- Numerical optimization. Theoretical and practical aspects. Transl. from the French
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- Title not available
- Benchmarking optimization software with performance profiles.
- Updating Quasi-Newton Matrices with Limited Storage
- A Family of Variable-Metric Methods Derived by Variational Means
- A new approach to variable metric algorithms
- Conditioning of Quasi-Newton Methods for Function Minimization
- Convex optimization theory.
- Compressed sensing
- Smoothing methods for nonsmooth, nonconvex minimization
- Methods of descent for nondifferentiable optimization
- Optimization and nonsmooth analysis
- Title not available
- Title not available
- Nonsmooth optimization via quasi-Newton methods
- The Convergence of a Class of Double-rank Minimization Algorithms 1. General Considerations
- Robust optimization-methodology and applications
- Optimization of lipschitz continuous functions
- Approximating Subdifferentials by Random Sampling of Gradients
- Convergence of the Gradient Sampling Algorithm for Nonsmooth Nonconvex Optimization
- An Algorithm for Constrained Optimization with Semismooth Functions
- Robust Portfolio Selection Problems
- Optimality conditions and a smoothing trust region Newton method for nonlipschitz optimization
- A derivative-free approximate gradient sampling algorithm for finite minimax problems
- Data fitting problems with bounded uncertainties in the data
- A sequential quadratic programming algorithm for nonconvex, nonsmooth constrained optimization
- A nonderivative version of the gradient sampling algorithm for nonsmooth nonconvex optimization
- A Method for Solving Certain Quadratic Programming Problems Arising in Nonsmooth Optimization
- Derivative-free optimization methods for finite minimax problems
- An adaptive gradient sampling algorithm for non-smooth optimization
- A nonsmooth optimisation approach for the stabilisation of time-delay systems
- The smoothed spectral abscissa for robust stability optimization
- Minimizing the Condition Number for Small Rank Modifications
Cited In (20)
- A feasible SQP-GS algorithm for nonconvex, nonsmooth constrained optimization
- A new sequential optimality condition for constrained nonsmooth optimization
- A BFGS-SQP method for nonsmooth, nonconvex, constrained optimization and its evaluation using relative minimization profiles
- Solving an inverse heat convection problem with an implicit forward operator by using a projected quasi-Newton method
- Combination of steepest descent and BFGS methods for nonconvex nonsmooth optimization
- A limited-memory quasi-Newton algorithm for bound-constrained non-smooth optimization
- Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations
- Incremental quasi-Newton algorithms for solving a nonconvex, nonsmooth, finite-sum optimization problem
- A fast gradient and function sampling method for finite-max functions
- An SL/QP algorithm for minimizing the spectral abscissa of time delay systems
- Manifold sampling for optimization of nonconvex functions that are piecewise linear compositions of smooth components
- A quasi-Newton approach to nonsmooth convex optimization problems in machine learning
- Limited-memory BFGS with displacement aggregation
- A hierarchy of spectral relaxations for polynomial optimization
- A Sequential Quadratic Programming Algorithm for Nonsmooth Problems with Upper- \({\boldsymbol{\mathcal{C}^2}}\) Objective
- A note on the convergence of deterministic gradient sampling in nonsmooth optimization
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- A geometric integration approach to nonsmooth, nonconvex optimisation
- Title not available
- Nonconvex piecewise-quadratic underestimation for global minimization