Convergence Conditions for Ascent Methods. II: Some Corrections
Publication: 5619730
DOI: 10.1137/1013035
zbMath: 0216.26901
OpenAlex: W2012960907
Wikidata: Q56560320 (Scholia: Q56560320)
MaRDI QID: Q5619730
Publication date: 1971
Published in: SIAM Review
Full work available at URL: https://doi.org/10.1137/1013035
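For context: this note is Philip Wolfe's published list of corrections to his 1969 SIAM Review paper "Convergence Conditions for Ascent Methods", the source of what are now called the Wolfe line-search conditions, which many of the related items below analyze, extend, or modify. As a point of reference (a standard modern statement for minimization, with notation assumed here rather than quoted from the paper, which treats ascent), the conditions accept a step size \(\alpha_k > 0\) along a descent direction \(p_k\) when
\[
f(x_k + \alpha_k p_k) \le f(x_k) + c_1 \alpha_k \nabla f(x_k)^{\top} p_k \quad \text{(sufficient decrease)}
\]
and
\[
\nabla f(x_k + \alpha_k p_k)^{\top} p_k \ge c_2 \nabla f(x_k)^{\top} p_k \quad \text{(curvature)},
\]
for fixed constants \(0 < c_1 < c_2 < 1\).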
Related Items (first 100 shown)
Convergence of quasi-Newton method with new inexact line search ⋮ From linear to nonlinear iterative methods ⋮ A robust descent type algorithm for geophysical inversion through adaptive regularization ⋮ An efficient modification of the Hestenes-Stiefel nonlinear conjugate gradient method with restart property ⋮
Accelerated adaptive Perry conjugate gradient algorithms based on the self-scaling memoryless BFGS update ⋮ Gaussian process regression for maximum entropy distribution ⋮ Accelerated memory-less SR1 method with generalized secant equation for unconstrained optimization ⋮ Some modified conjugate gradient methods for unconstrained optimization ⋮
Spectral method and its application to the conjugate gradient method ⋮ Two efficient modifications of AZPRP conjugate gradient method with sufficient descent property ⋮ A note on memory-less SR1 and memory-less BFGS methods for large-scale unconstrained optimization ⋮ STUDYING THE BASIN OF CONVERGENCE OF METHODS FOR COMPUTING PERIODIC ORBITS ⋮
Optimal control of bioprocess systems using hybrid numerical optimization algorithms ⋮ On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients ⋮ New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems ⋮ New inexact line search method for unconstrained optimization ⋮
A constrained optimization approach to solving certain systems of convex equations ⋮ Global convergence properties of the two new dependent Fletcher-Reeves conjugate gradient methods ⋮ Accelerated conjugate gradient algorithm with finite difference Hessian/vector product approximation for unconstrained optimization ⋮ A subclass of generating set search with convergence to second-order stationary points ⋮
Adaptive three-term PRP algorithms without gradient Lipschitz continuity condition for nonconvex functions ⋮ An efficient hybrid conjugate gradient method with the strong Wolfe-Powell line search ⋮ A new conjugate gradient algorithm with sufficient descent property for unconstrained optimization ⋮ A recalling-enhanced recurrent neural network: conjugate gradient learning algorithm and its convergence analysis ⋮
Constrained optimal control of switched systems based on modified BFGS algorithm and filled function method ⋮ A double-parameter scaling Broyden-Fletcher-Goldfarb-Shanno method based on minimizing the measure function of Byrd and Nocedal for unconstrained optimization ⋮ New conjugacy condition and related new conjugate gradient methods for unconstrained optimization ⋮ Stochastic quasi-Newton with line-search regularisation ⋮
A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization ⋮ Constrained neural network training and its application to hyperelastic material modeling ⋮ Modifications of the Wolfe line search rules to satisfy second-order optimality conditions in unconstrained optimization ⋮ A double parameter scaled BFGS method for unconstrained optimization ⋮
An adaptive three-term conjugate gradient method based on self-scaling memoryless BFGS matrix ⋮ Convergence and stability of line search methods for unconstrained optimization ⋮ A decent three term conjugate gradient method with global convergence properties for large scale unconstrained optimization problems ⋮ Two modified scaled nonlinear conjugate gradient methods ⋮
A modified conjugate gradient method based on the self-scaling memoryless BFGS update ⋮ Diagonal approximation of the Hessian by finite differences for unconstrained optimization ⋮ A diagonal quasi-Newton updating method for unconstrained optimization ⋮ A link between the steepest descent method and fixed-point iterations ⋮
New conjugate gradient method for unconstrained optimization ⋮ New conjugate gradient algorithms based on self-scaling memoryless Broyden-Fletcher-Goldfarb-Shanno method ⋮ A double parameter self-scaling memoryless BFGS method for unconstrained optimization ⋮ A quadratic hybridization of Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient methods ⋮
A New Adaptive Conjugate Gradient Algorithm for Large-Scale Unconstrained Optimization ⋮ A new general form of conjugate gradient methods with guaranteed descent and strong global convergence properties ⋮ Assimilating data on the location of the free surface of a fluid flow to determine its viscosity ⋮ Smoothed \(\ell_1\)-regularization-based line search for sparse signal recovery ⋮
On the sufficient descent property of the Shanno's conjugate gradient method ⋮ Convergence of conjugate gradient methods with a closed-form stepsize formula ⋮ A note on the global convergence theorem of the scaled conjugate gradient algorithms proposed by Andrei ⋮ An accelerated subspace minimization three-term conjugate gradient algorithm for unconstrained optimization ⋮
Semi-discrete optimal transport: a solution procedure for the unsquared Euclidean distance case ⋮ A sufficient descent LS conjugate gradient method for unconstrained optimization problems ⋮ The hybrid BFGS-CG method in solving unconstrained optimization problems ⋮ A class of gradient unconstrained minimization algorithms with adaptive stepsize ⋮
The global convergence of partitioned BFGS on problems with convex decompositions and Lipschitzian gradients ⋮ Cubic regularization in symmetric rank-1 quasi-Newton methods ⋮ An adaptive scaled BFGS method for unconstrained optimization ⋮ Scaled conjugate gradient algorithms for unconstrained optimization ⋮
Analysis of a self-scaling quasi-Newton method ⋮ Self-adaptive inexact proximal point methods ⋮ Modified nonmonotone Armijo line search for descent method ⋮ Improved sign-based learning algorithm derived by the composite nonlinear Jacobi process ⋮
Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization ⋮ Convergence of nonmonotone line search method ⋮ Convergence proof of minimization algorithms for nonconvex functions ⋮ Hybrid Riemannian conjugate gradient methods with global convergence properties ⋮
New accelerated conjugate gradient algorithms as a modification of Dai-Yuan's computational scheme for unconstrained optimization ⋮ On obtaining optimal well rates and placement for CO\(_2\) storage ⋮ On three-term conjugate gradient algorithms for unconstrained optimization ⋮ A new three-term conjugate gradient algorithm for unconstrained optimization ⋮
An efficient modified AZPRP conjugate gradient method for large-scale unconstrained optimization problem ⋮ Some sufficient descent conjugate gradient methods and their global convergence ⋮ Line search fixed point algorithms based on nonlinear conjugate gradient directions: application to constrained smooth convex optimization ⋮ A new CG algorithm based on a scaled memoryless BFGS update with adaptive search strategy, and its application to large-scale unconstrained optimization problems ⋮
Spectral three-term constrained conjugate gradient algorithm for function minimizations ⋮ Dynamic search trajectory methods for global optimization ⋮ A sufficient descent Liu–Storey conjugate gradient method and its global convergence ⋮ Sufficient descent Riemannian conjugate gradient methods ⋮
A class of globally convergent three-term Dai-Liao conjugate gradient methods ⋮ A globally and quadratically convergent algorithm with efficient implementation for unconstrained optimization ⋮ Hybrid conjugate gradient method for a convex optimization problem over the fixed-point set of a nonexpansive mapping ⋮ The revised DFP algorithm without exact line search ⋮
Projection onto a Polyhedron that Exploits Sparsity ⋮ Simultaneous reconstruction of the perfusion coefficient and initial temperature from time-average integral temperature measurements ⋮ A new accelerated conjugate gradient method for large-scale unconstrained optimization ⋮ A \(q\)-Polak-Ribière-Polyak conjugate gradient algorithm for unconstrained optimization problems ⋮
A hybrid-line-and-curve search globalization technique for inexact Newton methods ⋮ A truncated descent HS conjugate gradient method and its global convergence ⋮ Convergence properties of the Beale-Powell restart algorithm ⋮ A new type of quasi-Newton updating formulas based on the new quasi-Newton equation ⋮
A modified bat algorithm with conjugate gradient method for global optimization ⋮ On convergence of minimization methods: Attraction, repulsion, and selection ⋮ A new hybrid algorithm for convex nonlinear unconstrained optimization ⋮ Adaptive machine learning-based surrogate modeling to accelerate PDE-constrained optimization in enhanced oil recovery ⋮
Pseudospectral methods and iterative solvers for optimization problems from multiscale particle dynamics ⋮ A CLASS OF DFP ALGORITHMS WITH REVISED SEARCH DIRECTION ⋮ Convergence of descent method with new line search ⋮ Variable metric random pursuit