A new unconstrained optimization method for imprecise function and gradient values
From MaRDI portal
Cited in (16 documents):
- A class of gradient unconstrained minimization algorithms with adaptive stepsize
- A note on solution of nonlinear programming problems with imprecise function and gradient values
- Modified nonmonotone Armijo line search for descent method
- Convergence of quasi-Newton method with new inexact line search
- Adaptive algorithms for neural network supervised learning: a deterministic optimization approach
- A dimension-reducing method for unconstrained optimization
- Artificial nonmonotonic neural networks
- Locating, characterizing and computing the stationary points of a function
- New inexact line search method for unconstrained optimization
- Optimized explicit Runge-Kutta pair of orders \(9(8)\)
- Convergence of line search methods for unconstrained optimization
- Generalizations of the intermediate value theorem for approximating fixed points and zeros of continuous functions
- From linear to nonlinear iterative methods
- OPTAC: A portable software package for analyzing and comparing optimization methods by visualization
- The non-monotone conic algorithm
- Convergence of descent method with new line search
This page was built for publication: A new unconstrained optimization method for imprecise function and gradient values
MaRDI item: Q1916739