Two-parameter scaled memoryless BFGS methods with a nonmonotone choice for the initial step length
DOI: 10.1007/s11075-019-00658-1 · zbMATH Open: 1433.90157 · OpenAlex: W2910211331 · MaRDI QID: Q2009059
Authors: Saman Babaie-Kafaki, Zohre Aminifard
Publication date: 27 November 2019
Published in: Numerical Algorithms
Full work available at URL: https://doi.org/10.1007/s11075-019-00658-1
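For context, the methods in this line of work build on the self-scaling memoryless BFGS update (Perry–Shanno type), which applies the BFGS formula to a scaled identity using only the most recent step pair \(s_k\) and gradient difference \(y_k\). Below is a minimal sketch of the generic one-parameter scaled memoryless BFGS search direction, not the paper's specific two-parameter variant; the function name, the scaling parameter `theta`, and the curvature safeguard are illustrative assumptions:

```python
import numpy as np

def scaled_memoryless_bfgs_direction(g, s, y, theta):
    """Search direction d = -H g, where
        H = theta * (I - s y^T / s^T y)(I - y s^T / s^T y) + s s^T / s^T y
    is the memoryless BFGS matrix built from the single pair (s, y)
    with scaling parameter theta > 0.  H satisfies the secant
    equation H y = s and is positive definite whenever s^T y > 0."""
    sy = s @ y
    if sy <= 1e-12:
        # Safeguard: curvature condition fails, fall back to steepest descent.
        return -g
    # Expand H g term by term to avoid forming the n-by-n matrix H.
    Hg = theta * (g - (g @ s) / sy * y - (g @ y) / sy * s) \
         + (theta * (y @ y) / sy + 1.0) * (g @ s) / sy * s
    return -Hg
```

Because `H` is applied implicitly via inner products, the cost per iteration is O(n) in both time and memory, which is what makes memoryless BFGS attractive for very large-scale unconstrained optimization.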
Recommendations
- A double parameter self-scaling memoryless BFGS method for unconstrained optimization
- A hybrid scaling parameter for the scaled memoryless BFGS method based on the \(\ell_\infty\) matrix norm
- Scaled memoryless BFGS preconditioned steepest descent method for very large-scale unconstrained optimization
- A modified scaled memoryless BFGS preconditioned conjugate gradient algorithm for nonsmooth convex optimization
- Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization
- A double parameter scaled BFGS method for unconstrained optimization
- A modified scaled memoryless BFGS preconditioned conjugate gradient method for unconstrained optimization
- A modified scaling BFGS method for nonconvex minimization
- A modified scaling parameter for the memoryless BFGS updating formula
- On the limited memory BFGS method for large scale optimization
Mathematics Subject Classification: Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Methods of quasi-Newton type (90C53)
Cites Work
- Algorithm 851
- CUTEr and SifDec
- A note on performance profiles for benchmarking software
- Title not available
- Benchmarking optimization software with performance profiles
- Optimization theory and methods. Nonlinear programming
- Globally convergent three-term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization
- Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization
- Another hybrid conjugate gradient algorithm for unconstrained optimization
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- A Nonmonotone Line Search Technique for Newton’s Method
- A nonlinear conjugate gradient method based on the MBFGS secant condition
- A modified BFGS method and its global convergence in nonconvex minimization
- A spectral conjugate gradient method for unconstrained optimization
- Title not available
- Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization
- A modified scaled memoryless BFGS preconditioned conjugate gradient method for unconstrained optimization
- A modified scaling parameter for the memoryless BFGS updating formula
- Two modified scaled nonlinear conjugate gradient methods
- Optimal conditioning of self-scaling variable Metric algorithms
- A modified scaled conjugate gradient method with global convergence for nonconvex functions
- A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization
- On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae
- Two effective hybrid conjugate gradient algorithms based on modified BFGS updates
- Modifying the BFGS method
- An adaptive scaled BFGS method for unconstrained optimization
- A double parameter scaled BFGS method for unconstrained optimization
- A note on the global convergence theorem of accelerated adaptive Perry conjugate gradient methods
Cited In (13)
- A hybrid quasi-Newton method with application in sparse recovery
- A new self-scaling memoryless quasi-Newton update for unconstrained optimization
- A nonmonotone ADMM-based diagonal quasi-Newton update with application to the compressive sensing problem
- A scalar Hessian estimation with a sparse nonmonotone line search technique for the sparse recovery problem
- A double parameter scaled BFGS method for unconstrained optimization
- A nonmonotone adaptive trust region technique with a forgetting factor
- Global convergence of a modified two-parameter scaled BFGS method with Yuan-Wei-Lu line search for unconstrained optimization
- A double parameter self-scaling memoryless BFGS method for unconstrained optimization
- A restart scheme for the memoryless BFGS method
- A unified derivative-free projection method model for large-scale nonlinear equations with convex constraints
- A new version of augmented self-scaling BFGS method
- Diagonally scaled memoryless quasi-Newton methods with application to compressed sensing
- Nonmonotone diagonally scaled limited-memory BFGS methods with application to compressive sensing based on a penalty model