A regularized limited memory BFGS method for large-scale unconstrained optimization and its efficient implementations
From MaRDI portal
Publication:2125067
DOI: 10.1007/s10589-022-00351-5
zbMath: 1490.90282
arXiv: 2101.04413
OpenAlex: W3119583575
MaRDI QID: Q2125067
Hardik Tankaria, Shinji Sugimoto, Nobuo Yamashita
Publication date: 12 April 2022
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/2101.04413
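The cited works below include Nocedal's "Updating Quasi-Newton Matrices with Limited Storage", the basis of the limited memory BFGS (L-BFGS) search direction used in the publication's title method. As a minimal, generic sketch of the standard two-loop recursion (not the paper's regularized variant; the function name and the Barzilai-Borwein-style initial scaling are illustrative choices):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion (a generic sketch).

    Computes d = -H_k @ grad, where H_k is the implicit inverse-Hessian
    approximation built from the stored correction pairs
    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i (newest last).
    """
    q = grad.astype(float).copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: run from the newest pair back to the oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * s.dot(q)
        alphas.append(alpha)
        q -= alpha * y
    # Initial scaling H_0 = gamma * I, gamma = s'y / y'y (a common choice).
    if s_list:
        gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # Second loop: run from the oldest pair forward to the newest.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * y.dot(r)
        r += (alpha - beta) * s
    return -r
```

With an empty memory this reduces to steepest descent (`-grad`); with stored pairs, the resulting `H_k` satisfies the secant condition `H_k y_{k-1} = s_{k-1}` for the most recent pair.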
Related Items (1)
Uses Software
Cites Work
- Convergence properties of the regularized Newton method for the unconstrained nonconvex optimization
- A regularized Newton method without line search for unconstrained optimization
- A new regularized limited memory BFGS-type method based on modified secant conditions for unconstrained optimization problems
- On the limited memory BFGS method for large scale optimization
- Representations of quasi-Newton matrices and their use in limited memory methods
- Nonmonotone trust region method for solving optimization problems
- On efficiently combining limited-memory and trust-region techniques
- The Barzilai and Borwein Gradient Method for the Large Scale Unconstrained Minimization Problem
- Computing a Trust Region Step
- Two-Point Step Size Gradient Methods
- Updating Quasi-Newton Matrices with Limited Storage
- Quasi-Newton Methods, Motivation and Theory
- Matrix conditioning and nonlinear optimization
- Numerical Optimization
- A Nonmonotone Line Search Technique for Newton’s Method
- CUTEr and SifDec
- Variable metric methods of minimisation
- Benchmarking optimization software with performance profiles