Convergence Conditions for Ascent Methods. II: Some Corrections

From MaRDI portal

Publication:5619730

DOI: 10.1137/1013035 · zbMath: 0216.26901 · OpenAlex: W2012960907 · Wikidata: Q56560320 · Scholia: Q56560320 · MaRDI QID: Q5619730

Philip Wolfe

Publication date: 1971

Published in: SIAM Review

Full work available at URL: https://doi.org/10.1137/1013035
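For context, this paper corrects results on the step-length conditions now known as the Wolfe conditions (sufficient decrease plus curvature), which many of the related items below rely on. A minimal sketch of checking them numerically follows; the quadratic test function, starting point, and constants `c1`, `c2` are illustrative choices, not taken from the paper:

```python
import numpy as np

def wolfe_conditions(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for a step length alpha
    taken from x along a descent direction p."""
    fx, gx = f(x), grad(x)
    x_new = x + alpha * p
    slope = gx @ p  # directional derivative at x; negative for a descent direction
    armijo = f(x_new) <= fx + c1 * alpha * slope       # sufficient decrease
    curvature = grad(x_new) @ p >= c2 * slope          # curvature condition
    return bool(armijo and curvature)

# Illustrative example: f(x) = 0.5 * ||x||^2 with the steepest-descent direction.
f = lambda x: 0.5 * float(x @ x)
grad = lambda x: x
x0 = np.array([1.0, -2.0])
p = -grad(x0)
print(wolfe_conditions(f, grad, x0, p, alpha=0.5))   # True: both conditions hold
print(wolfe_conditions(f, grad, x0, p, alpha=0.05))  # False: step too short, curvature fails
```

A step that is too short fails the curvature condition (the directional derivative has not flattened enough), while a step that is far too long fails the sufficient-decrease condition; line searches that enforce both are what the paper's convergence results concern.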




Related Items (first 100 shown)

Convergence of quasi-Newton method with new inexact line search
From linear to nonlinear iterative methods
A robust descent type algorithm for geophysical inversion through adaptive regularization
An efficient modification of the Hestenes-Stiefel nonlinear conjugate gradient method with restart property
Accelerated adaptive Perry conjugate gradient algorithms based on the self-scaling memoryless BFGS update
Gaussian process regression for maximum entropy distribution
Accelerated memory-less SR1 method with generalized secant equation for unconstrained optimization
Some modified conjugate gradient methods for unconstrained optimization
Spectral method and its application to the conjugate gradient method
Two efficient modifications of AZPRP conjugate gradient method with sufficient descent property
A note on memory-less SR1 and memory-less BFGS methods for large-scale unconstrained optimization
STUDYING THE BASIN OF CONVERGENCE OF METHODS FOR COMPUTING PERIODIC ORBITS
Optimal control of bioprocess systems using hybrid numerical optimization algorithms
On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems
New inexact line search method for unconstrained optimization
A constrained optimization approach to solving certain systems of convex equations
Global convergence properties of the two new dependent Fletcher-Reeves conjugate gradient methods
Accelerated conjugate gradient algorithm with finite difference Hessian/vector product approximation for unconstrained optimization
A subclass of generating set search with convergence to second-order stationary points
Adaptive three-term PRP algorithms without gradient Lipschitz continuity condition for nonconvex functions
An efficient hybrid conjugate gradient method with the strong Wolfe-Powell line search
A new conjugate gradient algorithm with sufficient descent property for unconstrained optimization
A recalling-enhanced recurrent neural network: conjugate gradient learning algorithm and its convergence analysis
Constrained optimal control of switched systems based on modified BFGS algorithm and filled function method
A double-parameter scaling Broyden-Fletcher-Goldfarb-Shanno method based on minimizing the measure function of Byrd and Nocedal for unconstrained optimization
New conjugacy condition and related new conjugate gradient methods for unconstrained optimization
Stochastic quasi-Newton with line-search regularisation
A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization
Constrained neural network training and its application to hyperelastic material modeling
Modifications of the Wolfe line search rules to satisfy second-order optimality conditions in unconstrained optimization
A double parameter scaled BFGS method for unconstrained optimization
An adaptive three-term conjugate gradient method based on self-scaling memoryless BFGS matrix
Convergence and stability of line search methods for unconstrained optimization
A decent three term conjugate gradient method with global convergence properties for large scale unconstrained optimization problems
Two modified scaled nonlinear conjugate gradient methods
A modified conjugate gradient method based on the self-scaling memoryless BFGS update
Diagonal approximation of the Hessian by finite differences for unconstrained optimization
A diagonal quasi-Newton updating method for unconstrained optimization
A link between the steepest descent method and fixed-point iterations
New conjugate gradient method for unconstrained optimization
New conjugate gradient algorithms based on self-scaling memoryless Broyden-Fletcher-Goldfarb-Shanno method
A double parameter self-scaling memoryless BFGS method for unconstrained optimization
A quadratic hybridization of Polak-Ribière-Polyak and Fletcher-Reeves conjugate gradient methods
A New Adaptive Conjugate Gradient Algorithm for Large-Scale Unconstrained Optimization
A new general form of conjugate gradient methods with guaranteed descent and strong global convergence properties
Assimilating data on the location of the free surface of a fluid flow to determine its viscosity
Smoothed \(\ell_1\)-regularization-based line search for sparse signal recovery
On the sufficient descent property of the Shanno's conjugate gradient method
Convergence of conjugate gradient methods with a closed-form stepsize formula
A note on the global convergence theorem of the scaled conjugate gradient algorithms proposed by Andrei
An accelerated subspace minimization three-term conjugate gradient algorithm for unconstrained optimization
Semi-discrete optimal transport: a solution procedure for the unsquared Euclidean distance case
A sufficient descent LS conjugate gradient method for unconstrained optimization problems
The hybrid BFGS-CG method in solving unconstrained optimization problems
A class of gradient unconstrained minimization algorithms with adaptive stepsize
The global convergence of partitioned BFGS on problems with convex decompositions and Lipschitzian gradients
Cubic regularization in symmetric rank-1 quasi-Newton methods
An adaptive scaled BFGS method for unconstrained optimization
Scaled conjugate gradient algorithms for unconstrained optimization
Analysis of a self-scaling quasi-Newton method
Self-adaptive inexact proximal point methods
Modified nonmonotone Armijo line search for descent method
Improved sign-based learning algorithm derived by the composite nonlinear Jacobi process
Accelerated scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization
Convergence of nonmonotone line search method
Convergence proof of minimization algorithms for nonconvex functions
Hybrid Riemannian conjugate gradient methods with global convergence properties
New accelerated conjugate gradient algorithms as a modification of Dai-Yuan's computational scheme for unconstrained optimization
On obtaining optimal well rates and placement for CO\(_2\) storage
On three-term conjugate gradient algorithms for unconstrained optimization
A new three-term conjugate gradient algorithm for unconstrained optimization
An efficient modified AZPRP conjugate gradient method for large-scale unconstrained optimization problem
Some sufficient descent conjugate gradient methods and their global convergence
Line search fixed point algorithms based on nonlinear conjugate gradient directions: application to constrained smooth convex optimization
A new CG algorithm based on a scaled memoryless BFGS update with adaptive search strategy, and its application to large-scale unconstrained optimization problems
Spectral three-term constrained conjugate gradient algorithm for function minimizations
Dynamic search trajectory methods for global optimization
A sufficient descent Liu–Storey conjugate gradient method and its global convergence
Sufficient descent Riemannian conjugate gradient methods
A class of globally convergent three-term Dai-Liao conjugate gradient methods
A globally and quadratically convergent algorithm with efficient implementation for unconstrained optimization
Hybrid conjugate gradient method for a convex optimization problem over the fixed-point set of a nonexpansive mapping
The revised DFP algorithm without exact line search
Projection onto a Polyhedron that Exploits Sparsity
Simultaneous reconstruction of the perfusion coefficient and initial temperature from time-average integral temperature measurements
A new accelerated conjugate gradient method for large-scale unconstrained optimization
A \(q\)-Polak-Ribière-Polyak conjugate gradient algorithm for unconstrained optimization problems
A hybrid-line-and-curve search globalization technique for inexact Newton methods
A truncated descent HS conjugate gradient method and its global convergence
Convergence properties of the Beale-Powell restart algorithm
A new type of quasi-Newton updating formulas based on the new quasi-Newton equation
A modified bat algorithm with conjugate gradient method for global optimization
On convergence of minimization methods: Attraction, repulsion, and selection
A new hybrid algorithm for convex nonlinear unconstrained optimization
Adaptive machine learning-based surrogate modeling to accelerate PDE-constrained optimization in enhanced oil recovery
Pseudospectral methods and iterative solvers for optimization problems from multiscale particle dynamics
A CLASS OF DFP ALGORITHMS WITH REVISED SEARCH DIRECTION
Convergence of descent method with new line search
Variable metric random pursuit