scientific article; zbMATH DE number 3529352
From MaRDI portal
Publication: 4107408
zbMath: 0338.65038 · MaRDI QID: Q4107408
Publication date: 1976
Title: zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Numerical mathematical programming methods (65K05); Nonlinear programming (90C30); Rate of convergence, degree of approximation (41A25)
Related Items (only showing first 100 items)
Hartley-type algebras in displacement and optimization strategies. ⋮ A regularized limited memory BFGS method for large-scale unconstrained optimization and its efficient implementations ⋮ Forward-backward quasi-Newton methods for nonsmooth optimization problems ⋮ A modified Broyden family algorithm with global convergence under a weak Wolfe-Powell line search for unconstrained nonconvex problems ⋮ A descent hybrid conjugate gradient method based on the memoryless BFGS update ⋮ Modifying the BFGS update by a new column scaling technique ⋮ Family of optimally conditioned quasi-Newton updates for unconstrained optimization ⋮ Global convergence properties of the modified BFGS method associating with general line search model ⋮ A new quasi-Newton algorithm ⋮ Augmented Lagrangian approach for a bilateral free boundary problem ⋮ A parallel quasi-Newton algorithm for unconstrained optimization ⋮ Efficient line search algorithm for unconstrained optimization ⋮ Limited-memory BFGS with displacement aggregation ⋮ Variable metric bundle methods: From conceptual to implementable forms ⋮ An analysis of reduced Hessian methods for constrained optimization ⋮ A type of modified BFGS algorithm with any rank defects and the local \(Q\)-superlinear convergence properties ⋮ The global convergence of self-scaling BFGS algorithm with non-monotone line search for unconstrained nonconvex optimization problems ⋮ Modifying the BFGS method ⋮ Partitioning group correction Cholesky techniques for large scale sparse unconstrained optimization ⋮ How to deal with the unbounded in optimization: Theory and algorithms ⋮ A quasi-Newton method with Wolfe line searches for multiobjective optimization ⋮ The global convergence of the BFGS method with a modified WWP line search for nonconvex functions ⋮ An incomplete Hessian Newton minimization method and its application in a chemical database problem ⋮ Transformation of uniformly distributed particle ensembles ⋮ Parallel quasi-Newton methods for
unconstrained optimization ⋮ Global convergence of a nonmonotone Broyden family method for nonconvex unconstrained minimization ⋮ The global convergence of a modified BFGS method for nonconvex functions ⋮ Global convergence of a modified Broyden family method for nonconvex functions ⋮ A double-parameter scaling Broyden-Fletcher-Goldfarb-Shanno method based on minimizing the measure function of Byrd and Nocedal for unconstrained optimization ⋮ Hybrid method for nonlinear least-square problems without calculating derivatives ⋮ Nonsmooth optimization via quasi-Newton methods ⋮ A double parameter scaled BFGS method for unconstrained optimization ⋮ Global convergence of a modified limited memory BFGS method for non-convex minimization ⋮ Convergence and stability of line search methods for unconstrained optimization ⋮ The global convergence of the BFGS method under a modified Yuan-Wei-Lu line search technique ⋮ Some numerical experience with a globally convergent algorithm for nonlinearly constrained optimization ⋮ Über die globale Konvergenz von Variable-Metrik-Verfahren mit nicht-exakter Schrittweitenbestimmung ⋮ A new BFGS algorithm using the decomposition matrix of the correction matrix to obtain the search directions ⋮ Global convergence of a family of modified BFGS methods under a modified weak-Wolfe-Powell line search for nonconvex functions ⋮ Low complexity matrix projections preserving actions on vectors ⋮ The projection technique for two open problems of unconstrained optimization problems ⋮ The convergence of a new modified BFGS method without line searches for unconstrained optimization or complexity systems ⋮ Gradient-type methods: a unified perspective in computer science and numerical analysis ⋮ New quasi-Newton methods via higher order tensor models ⋮ An accelerated double step size model in unconstrained optimization ⋮ Minimizing a differentiable function over a differential manifold ⋮ A perfect example for the BFGS method ⋮ A modified Newton's method
for minimizing factorable functions ⋮ A variable metric algorithm for unconstrained minimization without evaluation of derivatives ⋮ A new algorithm for box-constrained global optimization ⋮ On the limited memory BFGS method for large scale optimization ⋮ A modified secant equation quasi-Newton method for unconstrained optimization ⋮ Partitioned variable metric updates for large structured optimization problems ⋮ A combined class of self-scaling and modified quasi-Newton methods ⋮ Accelerated double direction method for solving unconstrained optimization problems ⋮ The superlinear convergence analysis of a nonmonotone BFGS algorithm on convex objective functions ⋮ A Bregman extension of quasi-Newton updates. II: Analysis of robustness properties ⋮ Diagonal quasi-Newton methods via least change updating principle with weighted Frobenius norm ⋮ The global convergence of partitioned BFGS on problems with convex decompositions and Lipschitzian gradients ⋮ The equivalence of strict convexity and injectivity of the gradient in bounded level sets ⋮ Global convergence of a modified BFGS-type method for unconstrained non-convex minimization ⋮ The global and superlinear convergence of a new nonmonotone MBFGS algorithm on convex objective functions ⋮ A class of diagonal quasi-Newton methods for large-scale convex minimization ⋮ A modified BFGS method and its superlinear convergence in nonconvex minimization with general line search rule ⋮ Hybridization of accelerated gradient descent method ⋮ A new modified BFGS method for unconstrained optimization problems ⋮ A regularized limited memory BFGS method for nonconvex unconstrained minimization ⋮ Low complexity secant quasi-Newton minimization algorithms for nonconvex functions ⋮ Analysis of a self-scaling quasi-Newton method ⋮ New algorithms for linear programming ⋮ Global convergence of a modified two-parameter scaled BFGS method with Yuan-Wei-Lu line search for unconstrained optimization ⋮ Convergence property of a class of 
variable metric methods. ⋮ Extra-updates criterion for the limited memory BFGS algorithm for large scale nonlinear optimization ⋮ Adaptive matrix algebras in unconstrained minimization ⋮ A stochastic quasi-Newton method for simulation response optimization ⋮ Convergence analysis of a modified BFGS method on convex minimizations ⋮ A variation of Broyden class methods using Householder adaptive transforms ⋮ An acceleration of gradient descent algorithm with backtracking for unconstrained optimization ⋮ Implementing and modifying Broyden class updates for large scale optimization ⋮ Using nonlinear functions to approximate a new quasi-Newton method for unconstrained optimization problems ⋮ A Stochastic Quasi-Newton Method for Large-Scale Optimization ⋮ Global convergence of BFGS and PRP methods under a modified weak Wolfe-Powell line search ⋮ Mixing convex-optimization bounds for maximum-entropy sampling ⋮ A family of variable metric proximal methods ⋮ Numerical construction of spherical \(t\)-designs by Barzilai-Borwein method ⋮ A limited memory \(q\)-BFGS algorithm for unconstrained optimization problems ⋮ The revised DFP algorithm without exact line search ⋮ A comparison of nonlinear optimization methods for supervised learning in multilayer feedforward neural networks ⋮ A globalization procedure for solving nonlinear systems of equations ⋮ A globally convergent BFGS method with nonmonotone line search for non-convex minimization ⋮ Acceleration of conjugate gradient algorithms for unconstrained optimization ⋮ A modified BFGS algorithm based on a hybrid secant equation ⋮ Local convergence analysis for partitioned quasi-Newton updates ⋮ The convergence of Broyden algorithms for LC gradient function ⋮ A robust superlinearly convergent algorithm for linearly constrained optimization problems under degeneracy ⋮ The convergence of matrices generated by rank-2 methods from the restricted \(\beta\)-class of Broyden ⋮ A superlinearly convergent method to linearly
constrained optimization problems under degeneracy ⋮ Nonsmoothness and a variable metric method ⋮ Recent advances in trust region algorithms ⋮ An adaptive sizing BFGS method for unconstrained optimization