Perturbed steepest-descent technique in multiextremal problems
DOI: 10.1007/BF02192289
zbMath: 0871.90093
OpenAlex: W2049030952
MaRDI QID: Q1359471
Publication date: 6 July 1997
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://doi.org/10.1007/bf02192289
Keywords: global optimization; gradient method; stability under perturbations; Lyapunov direct method; steepest descent; perturbed steepest-descent technique
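The keywords describe a steepest-descent iteration whose gradient step is deliberately perturbed so that, on a multiextremal (multimodal) objective, the iterate is not necessarily trapped by the first local minimum it reaches. The sketch below is a minimal illustration of that idea, assuming a standard additive-perturbation form x_{k+1} = x_k - a_k (∇f(x_k) + ξ_k) with diminishing step sizes and noise levels; the schedules and the function `perturbed_steepest_descent` are illustrative assumptions, not the paper's exact scheme or conditions.

```python
import numpy as np

def perturbed_steepest_descent(grad, x0, steps=5000, seed=0):
    """Sketch of a perturbed steepest-descent iteration:
        x_{k+1} = x_k - a_k * (grad f(x_k) + xi_k),
    where xi_k is a decaying random perturbation intended to let the
    iterate leave shallow local minima of a multiextremal objective.
    The step-size and noise schedules below are assumptions for
    illustration only.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        a_k = 1.0 / k                # diminishing step size (assumed)
        sigma_k = 1.0 / np.sqrt(k)   # decaying perturbation level (assumed)
        xi = sigma_k * rng.standard_normal(x.shape)
        x = x - a_k * (grad(x) + xi)
    return x

# Usage on a two-well objective f(x) = x^4 - 8x^2 + x, which has two
# local minima of different depths; whether the perturbation actually
# carries the iterate between basins depends on the chosen schedules.
grad = lambda x: 4 * x**3 - 16 * x + 1
print(perturbed_steepest_descent(grad, x0=np.array([2.0])))
```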
Cites Work
- Nondifferential optimization via adaptive smoothing
- Global convergence and stabilization of unconstrained minimization methods without derivatives
- On the global optimization properties of finite-difference local descent algorithms
- Stability theory by Liapunov's direct method
- Stability theory for ordinary differential equations
- Derivative free analogues of the Levenberg-Marquardt and Gauss algorithms for nonlinear least squares approximation
- Truncated-Newton algorithms for large-scale unconstrained optimization
- A comparative study of several general convergence conditions for algorithms modeled by point-to-set maps
- Local Convergence of Difference Newton-Like Methods
- The Effect of Rounding Errors on Newton-like Methods
- Local Convergence of Inexact Newton Methods
- Inaccuracy in quasi-Newton methods: Local improvement theorems
- Stopping criteria for linesearch methods without derivatives
- Analogues of Dixon’s and Powell’s Theorems for Unconstrained Minimization with Inexact Line Searches
- Least-Change Sparse Secant Update Methods with Inaccurate Secant Conditions
- Approximate calculation of a pseudoinverse matrix using a generalized discrepancy principle
- Local properties of inexact methods for minimizing nonsmooth composite functions
- Inexact Newton Methods
- A note on a sufficient-decrease criterion for a non-derivative step-length procedure
- A Stability Analysis for Perturbed Nonlinear Iterative Methods
- Optimization of Globally Convex Functions
- Convergence properties of the gradient method under conditions of variable-level interference