A class of gradient unconstrained minimization algorithms with adaptive stepsize
From MaRDI portal
DOI: 10.1016/S0377-0427(99)00276-9 · zbMATH Open: 0958.65072 · Wikidata: Q126298369 · Scholia: Q126298369 · MaRDI QID: Q1970409 · FDO: Q1970409
Authors: Michael N. Vrahatis, G. S. Androulakis, J. N. Lambrinos, George D. Magoulas
Publication date: 10 April 2001
Published in: Journal of Computational and Applied Mathematics
Cites Work
- OPTAC: A portable software package for analyzing and comparing optimization methods by visualization
- Testing Unconstrained Optimization Software
- Title not available
- Function minimization by conjugate gradients
- Title not available
- Title not available
- Title not available
- Convergence Conditions for Ascent Methods
- Convergence Conditions for Ascent Methods. II: Some Corrections
- Title not available
- Title not available
- Optimization. Algorithms and consistent approximations
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Minimization of functions having Lipschitz continuous first partial derivatives
- On Steepest Descent
- Iterative Solution Methods
- Title not available
- A Tool for the Analysis of Quasi-Newton Methods with Application to Unconstrained Minimization
- Title not available
- Title not available
- Iterative Methods for Solving Partial Difference Equations of Elliptic Type
- Title not available
- A dimension-reducing method for unconstrained optimization
- A new unconstrained optimization method for imprecise function and gradient values
- The method of steepest descent for non-linear minimization problems
- Solution of Partial Differential Equations on Vector and Parallel Computers
- A Survey of Parallel Algorithms in Numerical Linear Algebra
- Cauchy's method of minimization
- An effective algorithm for minimization
- Title not available
- Rates of Convergence for a Class of Iterative Procedures
- Title not available
- Nonlinear successive over-relaxation
- On the acceleration of the backpropagation training method
- Title not available
Cited In (40)
- Global optimization through a stochastic perturbation of the Polak-Ribière conjugate gradient method
- Modified nonmonotone Armijo line search for descent method
- Self-adaptive algorithms for quasiconvex programming and applications to machine learning
- Convergence of quasi-Newton method with new inexact line search
- A new gradient method with an optimal stepsize property
- Iterative parameter estimation algorithms for dual-frequency signal models
- Title not available
- ADAPTIVE ALGORITHMS FOR NEURAL NETWORK SUPERVISED LEARNING: A DETERMINISTIC OPTIMIZATION APPROACH
- An efficient gradient method with approximately optimal stepsizes based on regularization models for unconstrained optimization
- Artificial nonmonotonic neural networks
- Studying the performance of artificial neural networks on problems related to cryptography
- A Positive Barzilai–Borwein-Like Stepsize and an Extension for Symmetric Linear Systems
- The global convergence of the BFGS method with a modified WWP line search for nonconvex functions
- Convergence of descent method without line search
- Convergence of nonmonotone line search method
- A gradient-related algorithm with inexact line searches
- New stepsizes for the gradient method
- Accelerated multiple step-size methods for solving unconstrained optimization problems
- A new super-memory gradient method with curve search rule
- Convergence analysis of a modified BFGS method on convex minimizations
- Determining the number of real roots of polynomials through neural networks
- New line search methods for unconstrained optimization
- A new descent algorithm with curve search rule
- Discrete tomography with unknown intensity levels using higher-order statistics
- Accelerated gradient descent methods with line search
- A modified two-point stepsize gradient algorithm for unconstrained minimization
- Studying the basin of convergence of methods for computing periodic orbits
- A descent algorithm without line search for unconstrained optimization
- A new nonmonotone spectral residual method for nonsmooth nonlinear equations
- Global convergence of a modified Broyden family method for nonconvex functions
- Convergence of line search methods for unconstrained optimization
- Assessing the effectiveness of artificial neural networks on problems related to elliptic curve cryptography
- From linear to nonlinear iterative methods
- Non monotone backtracking inexact BFGS method for regression analysis
- A survey of gradient methods for solving nonlinear optimization
- On memory gradient method with trust region for unconstrained optimization
- Smoothed \(\ell_1\)-regularization-based line search for sparse signal recovery
- A modified nonmonotone BFGS algorithm for unconstrained optimization
- Convergence of descent method with new line search
- Multivariate spectral gradient method for unconstrained optimization