Using approximate secant equations in limited memory methods for multilevel unconstrained optimization
DOI: 10.1007/s10589-011-9393-3
zbMath: 1245.90122
OpenAlex: W1995294269
Wikidata: Q58185718
Scholia: Q58185718
MaRDI QID: Q429494
Serge Gratton, Vincent Malmedy, Philippe L. Toint
Publication date: 19 June 2012
Published in: Computational Optimization and Applications
Full work available at URL: https://doi.org/10.1007/s10589-011-9393-3
Cites Work
- Unnamed Item
- Unnamed Item
- Using approximate secant equations in limited memory methods for multilevel unconstrained optimization
- On the limited memory BFGS method for large scale optimization
- Approximate invariant subspaces and quasi-Newton optimization methods
- A recursive ℓ∞-trust-region method for bound-constrained nonlinear optimization
- Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent
- Numerical experience with a recursive trust-region method for multilevel nonlinear bound-constrained optimization
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Updating Quasi-Newton Matrices with Limited Storage
- A multigrid approach to discretized optimization problems
- A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
- A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search
- Function minimization by conjugate gradients
- A Family of Variable-Metric Methods Derived by Variational Means
- A new approach to variable metric algorithms
- The Convergence of a Class of Double-rank Minimization Algorithms 1. General Considerations
- Conditioning of Quasi-Newton Methods for Function Minimization
- Methods of conjugate gradients for solving linear systems
- Benchmarking optimization software with performance profiles
- An efficient hybrid conjugate gradient method for unconstrained optimization