scientific article; zbMATH DE number 88930
From MaRDI portal
Publication: Q4016506
zbMATH Open: 0766.65051 · MaRDI QID: Q4016506
Authors: Jorge Nocedal
Publication date: 16 January 1993
Title of this publication is not available.
Keywords: global convergence; large scale optimization; unconstrained optimization; conjugate gradient method; Newton's method; quasi-Newton method; BFGS variable metric method; Nelder-Mead method
Mathematics Subject Classification: Numerical mathematical programming methods (65K05); Large-scale problems in mathematical programming (90C06); Nonlinear programming (90C30)
Cited In (81)
- Global convergence of conjugate gradient method
- A Hessian-free Newton-Raphson method for the configuration of physics systems featured by numerically asymmetric force field
- What, if anything, is new in optimization?
- Title not available
- A Bregman extension of quasi-Newton updates. II: Analysis of robustness properties
- Optimization in ℝⁿ by Coggin's method
- A note on memory-less SR1 and memory-less BFGS methods for large-scale unconstrained optimization
- Improved sign-based learning algorithm derived by the composite nonlinear Jacobi process
- On step-size estimation of line search methods
- The use of hypothetical points in numerical optimization
- An overview of nonlinear optimization
- Title not available
- A noise-tolerant quasi-Newton algorithm for unconstrained optimization
- A proximal stochastic quasi-Newton algorithm with dynamical sampling and stochastic line search
- Modified globally convergent Polak-Ribière-Polyak conjugate gradient methods with self-correcting property for large-scale unconstrained optimization
- Diagonal approximation of the Hessian by finite differences for unconstrained optimization
- A class of gradient unconstrained minimization algorithms with adaptive stepsize
- Title not available
- Globally convergent algorithms for solving unconstrained optimization problems
- Convergence of PRP method with new nonmonotone line search
- Smoothing methods for convex inequalities and linear complementarity problems
- A new trust region method with adaptive radius
- An adaptive scaled BFGS method for unconstrained optimization
- Modified nonmonotone Armijo line search for descent method
- A note on Kantorovich inequality for Hermite matrices
- Low cost optimization techniques for solving the nonlinear seismic reflection tomography problem
- Title not available
- A functional optimization approach to an inverse magneto-convection problem.
- Conjugate gradient algorithm and fractals
- Gradient-type methods: a unified perspective in computer science and numerical analysis
- A double parameter scaled BFGS method for unconstrained optimization
- A modified PRP conjugate gradient method
- Modifying the BFGS method
- Exploiting Hessian matrix and trust-region algorithm in hyperparameters estimation of Gaussian process
- A new version of the Liu-Storey conjugate gradient method
- Global convergence of a modified two-parameter scaled BFGS method with Yuan-Wei-Lu line search for unconstrained optimization
- Accelerated memory-less SR1 method with generalized secant equation for unconstrained optimization
- Symbiosis between linear algebra and optimization
- Title not available
- A class of nonmonotone conjugate gradient methods for unconstrained optimization
- Differential optimization techniques
- A descent hybrid conjugate gradient method based on the memoryless BFGS update
- A double parameter self-scaling memoryless BFGS method for unconstrained optimization
- New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems
- The convergence of subspace trust region methods
- Convergence and numerical results for a parallel asynchronous quasi-Newton method
- A new robust line search technique based on Chebyshev polynomials
- A structured quasi-Newton algorithm with nonmonotone search strategy for structured NLS problems and its application in robotic motion control
- A direct proof and a generalization for a Kantorovich type inequality
- How to deal with the unbounded in optimization: Theory and algorithms
- A method of trust region type for minimizing noisy functions
- A double-parameter scaling Broyden-Fletcher-Goldfarb-Shanno method based on minimizing the measure function of Byrd and Nocedal for unconstrained optimization
- On the behaviour of a combined extra-updating/self-scaling BFGS method
- Convergence of the Polak-Ribière-Polyak conjugate gradient method
- Title not available
- Optimality tests for partitioning and sectional search algorithms
- Global convergence of the Polak-Ribière-Polyak conjugate gradient method with an Armijo-type inexact line search for nonconvex unconstrained optimization problems
- Convergence of line search methods for unconstrained optimization
- Using function-values in multi-step quasi-Newton methods
- Adaptive scaling damped BFGS method without gradient Lipschitz continuity
- Convergence of the descent Dai-Yuan conjugate gradient method for unconstrained optimization
- A preconditioned descent algorithm for variational inequalities of the second kind involving the \(p\)-Laplacian operator
- Some numerical methods for the study of the convexity notions arising in the calculus of variations
- A projected gradient and constraint linearization method for nonlinear model predictive control
- The cardiovascular system: mathematical modelling, numerical algorithms and clinical applications
- A perfect example for the BFGS method
- An algorithm for unconstrained optimization
- On conjugate gradient-like methods for eigen-like problems
- Variable metric methods for unconstrained optimization and nonlinear least squares
- Convergence Properties of Algorithms for Nonlinear Optimization
- A symmetric rank-one method based on extra updating techniques for unconstrained optimization
- The projection technique for two open problems of unconstrained optimization problems
- Sequential quadratic programming for large-scale nonlinear optimization
- A diagonal quasi-Newton updating method for unconstrained optimization
- A regularized limited memory BFGS method for nonconvex unconstrained minimization
- Nonmonotone adaptive trust region method
- New quasi-Newton methods via higher order tensor models
- A new class of nonmonotone conjugate gradient training algorithms
- Approximate Hessian for accelerated convergence of aerodynamic shape optimization problems in an adjoint-based framework
- Convergence and stability of line search methods for unconstrained optimization
- A new conjugate gradient algorithm for training neural networks based on a modified secant equation