Scientific article (from the MaRDI portal)
Publication: 4016506
zbMath: 0766.65051; MaRDI QID: Q4016506
Publication date: 16 January 1993
Title: zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Keywords: unconstrained optimization; global convergence; Newton's method; conjugate gradient method; quasi-Newton method; large scale optimization; BFGS variable metric method; Nelder-Mead method
MSC: Numerical mathematical programming methods (65K05); Large-scale problems in mathematical programming (90C06); Nonlinear programming (90C30)
Related Items (62)
Accelerated memory-less SR1 method with generalized secant equation for unconstrained optimization
A method of trust region type for minimizing noisy functions
Convergence of line search methods for unconstrained optimization
A note on memory-less SR1 and memory-less BFGS methods for large-scale unconstrained optimization
A descent hybrid conjugate gradient method based on the memoryless BFGS update
Convergence and numerical results for a parallel asynchronous quasi-Newton method
A new robust line search technique based on Chebyshev polynomials
A new version of the Liu-Storey conjugate gradient method
New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems
Approximate Hessian for accelerated convergence of aerodynamic shape optimization problems in an adjoint-based framework
The convergence of subspace trust region methods
Modifying the BFGS method
A Projected Gradient and Constraint Linearization Method for Nonlinear Model Predictive Control
A preconditioned descent algorithm for variational inequalities of the second kind involving the \(p\)-Laplacian operator
Convergence of the Polak-Ribière-Polyak conjugate gradient method
A double-parameter scaling Broyden-Fletcher-Goldfarb-Shanno method based on minimizing the measure function of Byrd and Nocedal for unconstrained optimization
Using function-values in multi-step quasi-Newton methods
On conjugate gradient-like methods for eigen-like problems
Smoothing methods for convex inequalities and linear complementarity problems
A double parameter scaled BFGS method for unconstrained optimization
Convergence and stability of line search methods for unconstrained optimization
Diagonal approximation of the Hessian by finite differences for unconstrained optimization
A diagonal quasi-Newton updating method for unconstrained optimization
Nonmonotone adaptive trust region method
A double parameter self-scaling memoryless BFGS method for unconstrained optimization
The cardiovascular system: Mathematical modelling, numerical algorithms and clinical applications
Modified globally convergent Polak-Ribière-Polyak conjugate gradient methods with self-correcting property for large-scale unconstrained optimization
The projection technique for two open problems of unconstrained optimization problems
New quasi-Newton methods via higher order tensor models
An overview of nonlinear optimization
A new conjugate gradient algorithm for training neural networks based on a modified secant equation
A perfect example for the BFGS method
A symmetric rank-one method based on extra updating techniques for unconstrained optimization
A class of gradient unconstrained minimization algorithms with adaptive stepsize
A Bregman extension of quasi-Newton updates. II: Analysis of robustness properties
A Hessian-free Newton-Raphson method for the configuration of physics systems featured by numerically asymmetric force field
A new class of nonmonotone conjugate gradient training algorithms
Convergence of PRP method with new nonmonotone line search
An adaptive scaled BFGS method for unconstrained optimization
A regularized limited memory BFGS method for nonconvex unconstrained minimization
A functional optimization approach to an inverse magneto-convection problem
Modified nonmonotone Armijo line search for descent method
A note on Kantorovich inequality for Hermite matrices
Some numerical methods for the study of the convexity notions arising in the calculus of variations
Global convergence of a modified two-parameter scaled BFGS method with Yuan-Wei-Lu line search for unconstrained optimization
Improved sign-based learning algorithm derived by the composite nonlinear Jacobi process
On step-size estimation of line search methods
Exploiting Hessian matrix and trust-region algorithm in hyperparameters estimation of Gaussian process
Conjugate gradient algorithm and fractals
A direct proof and a generalization for a Kantorovich type inequality
On the behaviour of a combined extra-updating/self-scaling BFGS method
Global convergence of the Polak-Ribière-Polyak conjugate gradient method with an Armijo-type inexact line search for nonconvex unconstrained optimization problems
A structured quasi-Newton algorithm with nonmonotone search strategy for structured NLS problems and its application in robotic motion control
Low cost optimization techniques for solving the nonlinear seismic reflection tomography problem
Convergence of the descent Dai-Yuan conjugate gradient method for unconstrained optimization
Global convergence of conjugate gradient method
Adaptive scaling damped BFGS method without gradient Lipschitz continuity
A modified PRP conjugate gradient method
A new trust region method with adaptive radius
A class of nonmonotone conjugate gradient methods for unconstrained optimization
Symbiosis between linear algebra and optimization
Sequential quadratic programming for large-scale nonlinear optimization