A gradient-based continuous method for large-scale optimization problems
DOI: 10.1007/s10898-004-5700-1 · zbMath: 1090.90147 · OpenAlex: W2080637608 · MaRDI QID: Q1781969
Liqun Qi, Li-Zhi Liao, Hon-Wah Tam
Publication date: 9 June 2005
Published in: Journal of Global Optimization
Full work available at URL: https://doi.org/10.1007/s10898-004-5700-1
MSC classifications:
- Large-scale problems in mathematical programming (90C06)
- Interior-point methods (90C51)
- Nonlinear ordinary differential equations and systems (34A34)
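The MSC classes above place the work at the intersection of large-scale mathematical programming and nonlinear ODE systems: in a gradient-based continuous method, one follows the gradient-flow ODE dx/dt = -∇f(x) toward a stationary point of f. The sketch below illustrates that generic idea with forward-Euler integration on an assumed convex quadratic test function; it is not the authors' actual scheme, and the function names and step-size choices are illustrative assumptions.

```python
import numpy as np

def gradient_flow(grad, x0, dt=1e-3, tol=1e-8, max_steps=200_000):
    """Follow the gradient-flow ODE dx/dt = -grad_f(x) with forward
    Euler until the gradient norm drops below tol.

    A generic sketch of the continuous-method idea, not the specific
    discretization used in the paper."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - dt * g  # Euler step along the negative gradient
    return x

# Illustrative test problem: f(x) = 0.5 x^T A x - b^T x,
# whose minimizer solves the linear system A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = gradient_flow(lambda x: A @ x - b, x0=np.zeros(2))
```

For this quadratic, the trajectory of the ODE converges to the solution of A x = b, so `x_star` should agree with `np.linalg.solve(A, b)` to within the gradient tolerance.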
Related Items (6)
- Convergence analysis of a global optimization algorithm using stochastic differential equations
- On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
- Linearly constrained global optimization and stochastic differential equations
- Identification of flexural rigidity in a Kirchhoff plates model using a convex objective and continuous Newton method
- A Kronecker approximation with a convex constrained optimization method for blind image restoration
- A new trust region method with adaptive radius
Cites Work
- Automatic Selection of Methods for Solving Stiff and Nonstiff Systems of Ordinary Differential Equations
- Testing Unconstrained Optimization Software
- Neural networks and physical systems with emergent collective computational abilities
- Stability analysis of gradient-based neural networks for optimization problems