A class of gradient unconstrained minimization algorithms with adaptive stepsize
From MaRDI portal
Cites work
- scientific article (no title recorded); zbMATH DE number 3928227
- scientific article (no title recorded); zbMATH DE number 46305
- scientific article (no title recorded); zbMATH DE number 51537
- scientific article (no title recorded); zbMATH DE number 88930
- scientific article (no title recorded); zbMATH DE number 3473182
- scientific article (no title recorded); zbMATH DE number 3526471
- scientific article (no title recorded); zbMATH DE number 1215248
- scientific article (no title recorded); zbMATH DE number 1086504
- scientific article (no title recorded); zbMATH DE number 1356150
- scientific article (no title recorded); zbMATH DE number 841441
- scientific article (no title recorded); zbMATH DE number 3215568
- scientific article (no title recorded); zbMATH DE number 3381785
- scientific article (no title recorded); zbMATH DE number 3388498
- A Survey of Parallel Algorithms in Numerical Linear Algebra
- A Tool for the Analysis of Quasi-Newton Methods with Application to Unconstrained Minimization
- A dimension-reducing method for unconstrained optimization
- A new unconstrained optimization method for imprecise function and gradient values
- An effective algorithm for minimization
- Cauchy's method of minimization
- Convergence Conditions for Ascent Methods
- Convergence Conditions for Ascent Methods. II: Some Corrections
- Function minimization by conjugate gradients
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Iterative Methods for Solving Partial Difference Equations of Elliptic Type
- Iterative Solution Methods
- Minimization of functions having Lipschitz continuous first partial derivatives
- Nonlinear successive over-relaxation
- OPTAC: A portable software package for analyzing and comparing optimization methods by visualization
- On Steepest Descent
- On the acceleration of the backpropagation training method
- Optimization. Algorithms and consistent approximations
- Rates of Convergence for a Class of Iterative Procedures
- Solution of Partial Differential Equations on Vector and Parallel Computers
- Testing Unconstrained Optimization Software
- The method of steepest descent for non-linear minimization problems
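Several of the cited works above concern stepsize selection and line searches for gradient methods, and Barzilai–Borwein-type stepsizes appear among the citing works below. As a hedged illustration only (the page records no details of the paper's own scheme), here is a minimal sketch of gradient descent with the classical BB1 adaptive stepsize:

```python
import numpy as np

def bb_gradient_descent(grad, x0, n_iter=50, alpha0=1e-2):
    """Gradient descent with the Barzilai-Borwein (BB1) adaptive stepsize.

    Hypothetical illustration: BB1 is one well-known adaptive-stepsize
    rule, not necessarily the class studied in the paper itself.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0                      # initial stepsize before any BB update
    for _ in range(n_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s = x_new - x                   # iterate difference s_k
        y = g_new - g                   # gradient difference y_k
        denom = s @ y
        if denom > 1e-12:               # BB1 stepsize: (s's) / (s'y)
            alpha = (s @ s) / denom
        x, g = x_new, g_new
    return x

# Example: convex quadratic f(x) = 0.5 x'Ax with A = diag(1, 10);
# the minimizer is the origin.
A = np.diag([1.0, 10.0])
x_star = bb_gradient_descent(lambda x: A @ x, [3.0, -2.0])
```

On well-conditioned quadratics the BB stepsize is known to converge quickly even though the iteration is nonmonotone, which is why it often serves as a baseline for adaptive-stepsize gradient methods.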
Cited in (40 documents)
- Global optimization through a stochastic perturbation of the Polak-Ribière conjugate gradient method
- Modified nonmonotone Armijo line search for descent method
- Self-adaptive algorithms for quasiconvex programming and applications to machine learning
- Convergence of quasi-Newton method with new inexact line search
- A new gradient method with an optimal stepsize property
- Iterative parameter estimation algorithms for dual-frequency signal models
- scientific article (no title recorded); zbMATH DE number 7306906
- Adaptive algorithms for neural network supervised learning: a deterministic optimization approach
- Studying the performance of artificial neural networks on problems related to cryptography
- An efficient gradient method with approximately optimal stepsizes based on regularization models for unconstrained optimization
- Artificial nonmonotonic neural networks
- The global convergence of the BFGS method with a modified WWP line search for nonconvex functions
- A Positive Barzilai–Borwein-Like Stepsize and an Extension for Symmetric Linear Systems
- Convergence of descent method without line search
- A gradient-related algorithm with inexact line searches
- Convergence of nonmonotone line search method
- New stepsizes for the gradient method
- A new super-memory gradient method with curve search rule
- Convergence analysis of a modified BFGS method on convex minimizations
- Accelerated multiple step-size methods for solving unconstrained optimization problems
- Determining the number of real roots of polynomials through neural networks
- New line search methods for unconstrained optimization
- A new descent algorithm with curve search rule
- Discrete tomography with unknown intensity levels using higher-order statistics
- A modified two-point stepsize gradient algorithm for unconstrained minimization
- Accelerated gradient descent methods with line search
- Studying the basin of convergence of methods for computing periodic orbits
- A descent algorithm without line search for unconstrained optimization
- A new nonmonotone spectral residual method for nonsmooth nonlinear equations
- Convergence of line search methods for unconstrained optimization
- Global convergence of a modified Broyden family method for nonconvex functions
- Assessing the effectiveness of artificial neural networks on problems related to elliptic curve cryptography
- From linear to nonlinear iterative methods
- A survey of gradient methods for solving nonlinear optimization
- On memory gradient method with trust region for unconstrained optimization
- Non monotone backtracking inexact BFGS method for regression analysis
- Smoothed ℓ1-regularization-based line search for sparse signal recovery
- A modified nonmonotone BFGS algorithm for unconstrained optimization
- Convergence of descent method with new line search
- Multivariate spectral gradient method for unconstrained optimization
This page was built for publication: A class of gradient unconstrained minimization algorithms with adaptive stepsize (MaRDI item Q1970409)