Backtracking gradient descent method and some applications in large scale optimisation. II: Algorithms and experiments
From MaRDI portal
Publication:2234294
DOI: 10.1007/s00245-020-09718-8
MaRDI QID: Q2234294
Hang-Tuan Nguyen, Tuyen Trung Truong
Publication date: 19 October 2021
Published in: Applied Mathematics and Optimization
Full work available at URL: https://doi.org/10.1007/s00245-020-09718-8
Keywords: global convergence; local minimum; random dynamical systems; image classification; backtracking; gradient descent; deep neural networks; large scale optimisation; iterative optimisation; automation of learning rates
68Txx: Artificial intelligence
49Mxx: Numerical methods in optimal control
68Uxx: Computing methodologies and applications
65Kxx: Numerical methods for mathematical programming, optimization and variational techniques
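The keywords center on backtracking gradient descent with automated learning rates. As background for the record, here is a minimal sketch of the classical Armijo backtracking line search combined with gradient descent; this illustrates the general technique named in the title, not the paper's specific algorithm, and the parameter names (`delta0`, `alpha`, `beta`) are illustrative choices:

```python
import numpy as np

def backtracking_gradient_descent(f, grad, x0, delta0=1.0, alpha=1e-4,
                                  beta=0.5, tol=1e-8, max_iter=10_000):
    """Gradient descent with Armijo backtracking line search (sketch).

    At each step the trial learning rate is shrunk by `beta` until the
    sufficient-decrease (Armijo) condition
        f(x - d*g) <= f(x) - alpha * d * ||g||^2
    holds, so the step size is chosen automatically rather than tuned
    by hand ("automation of learning rates").
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = float(g @ g)
        if gnorm2 < tol**2:  # gradient small enough: stop
            break
        d = delta0
        while f(x - d * g) > f(x) - alpha * d * gnorm2:
            d *= beta  # backtrack until sufficient decrease holds
        x = x - d * g
    return x

# Example: minimize the convex quadratic f(x) = ||x||^2 / 2.
x_min = backtracking_gradient_descent(lambda x: 0.5 * float(x @ x),
                                      lambda x: x,
                                      x0=[3.0, -4.0])
```

For this quadratic the full step `d = delta0 = 1.0` already satisfies the Armijo condition, so the iterate reaches the minimizer at the origin immediately; on harder objectives the inner loop shrinks the step until the decrease condition is met.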
Cites Work
- Gradient methods of maximization
- Cauchy's method of minimization
- Introductory lectures on convex optimization. A basic course.
- Optimization and dynamical systems
- Minimization of functions having Lipschitz continuous first partial derivatives
- Numerical Optimization
- Gradient Convergence in Gradient Methods with Errors
- Probabilistic Line Searches for Stochastic Optimization
- Gradient Descent Only Converges to Minimizers: Non-Isolated Critical Points and Invariant Regions
- Optimization Methods for Large-Scale Machine Learning
- Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions
- Convergence of the Iterates of Descent Methods for Analytic Cost Functions
- Convergence Conditions for Ascent Methods
- Limit Points of Sequences in Metric Spaces
- A Stochastic Approximation Method
- The method of steepest descent for non-linear minimization problems
- Optimization