A class of gradient unconstrained minimization algorithms with adaptive stepsize

From MaRDI portal

Publication: 1970409

DOI: 10.1016/S0377-0427(99)00276-9
zbMath: 0958.65072
Wikidata: Q126298369
Scholia: Q126298369
MaRDI QID: Q1970409

G. S. Androulakis, Michael N. Vrahatis, J. N. Lambrinos, George D. Magoulas

Publication date: 10 April 2001

Published in: Journal of Computational and Applied Mathematics




Related Items (36)

Convergence of quasi-Newton method with new inexact line search
From linear to nonlinear iterative methods
Convergence of line search methods for unconstrained optimization
STUDYING THE BASIN OF CONVERGENCE OF METHODS FOR COMPUTING PERIODIC ORBITS
Iterative parameter estimation algorithms for dual-frequency signal models
Non Monotone Backtracking Inexact BFGS Method for Regression Analysis
A Positive Barzilai–Borwein-Like Stepsize and an Extension for Symmetric Linear Systems
A new nonmonotone spectral residual method for nonsmooth nonlinear equations
The global convergence of the BFGS method with a modified WWP line search for nonconvex functions
Studying the performance of artificial neural networks on problems related to cryptography
A modified nonmonotone BFGS algorithm for unconstrained optimization
A gradient-related algorithm with inexact line searches
Global convergence of a modified Broyden family method for nonconvex functions
Discrete tomography with unknown intensity levels using higher-order statistics
Smoothed \(\ell_1\)-regularization-based line search for sparse signal recovery
A survey of gradient methods for solving nonlinear optimization
New stepsizes for the gradient method
Multivariate spectral gradient method for unconstrained optimization
Global optimization through a stochastic perturbation of the Polak-Ribière conjugate gradient method
Assessing the effectiveness of artificial neural networks on problems related to elliptic curve cryptography
Modified nonmonotone Armijo line search for descent method
A new descent algorithm with curve search rule
On memory gradient method with trust region for unconstrained optimization
Convergence of nonmonotone line search method
Convergence analysis of a modified BFGS method on convex minimizations
ADAPTIVE ALGORITHMS FOR NEURAL NETWORK SUPERVISED LEARNING: A DETERMINISTIC OPTIMIZATION APPROACH
A new gradient method with an optimal stepsize property
Determining the number of real roots of polynomials through neural networks
Artificial nonmonotonic neural networks
New line search methods for unconstrained optimization
Accelerated gradient descent methods with line search
A descent algorithm without line search for unconstrained optimization
Convergence of descent method without line search
A new super-memory gradient method with curve search rule
Accelerated multiple step-size methods for solving unconstrained optimization problems
Convergence of descent method with new line search
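Several of the related items above concern adaptive or spectral stepsize rules for gradient methods. As a hedged illustration of the general idea (adapting the stepsize from gradient information rather than using a fixed value or a full line search), the following sketch uses the well-known Barzilai–Borwein rule. This is NOT the specific algorithm of the publication recorded here; the function name, tolerances, and initial stepsize are all illustrative assumptions.

```python
import numpy as np

def grad_descent_bb(grad, x0, tol=1e-8, max_iter=500):
    """Gradient descent with a Barzilai-Borwein adaptive stepsize.

    Illustrative sketch only -- not the algorithm of the cited paper.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1e-3  # initial stepsize (assumed value)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s = x_new - x          # step difference
        y = g_new - g          # gradient difference
        sy = s @ y
        if sy > 0:             # BB1 rule is only safe when s'y > 0
            alpha = (s @ s) / sy
        x, g = x_new, g_new
    return x

# Minimize f(x) = 0.5 x'Ax - b'x with A positive definite,
# whose unique minimizer solves Ax = b.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b
x_star = grad_descent_bb(grad, np.zeros(2))
```

On this quadratic the returned point agrees with the linear-system solution `np.linalg.solve(A, b)`; the stepsize adapts automatically to the curvature seen along successive steps, which is the common thread of the adaptive-stepsize literature listed above.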


Uses Software



Cites Work




This page was built for publication: A class of gradient unconstrained minimization algorithms with adaptive stepsize