Backtracking gradient descent method and some applications in large scale optimisation. II: Algorithms and experiments
DOI: 10.1007/s00245-020-09718-8 · OpenAlex: W3083378728 · MaRDI QID: Q2234294 · FDO: Q2234294
Authors: Hang-Tuan Nguyen, Tuyen Trung Truong
Publication date: 19 October 2021
Published in: Applied Mathematics and Optimization
Full work available at URL: https://doi.org/10.1007/s00245-020-09718-8
Keywords: global convergence; random dynamical systems; backtracking; gradient descent; image classification; local minimum; deep neural networks; large scale optimisation; iterative optimisation; automation of learning rates
MSC: Numerical methods in optimal control (49Mxx); Artificial intelligence (68Txx); Numerical methods for mathematical programming, optimization and variational techniques (65Kxx); Computing methodologies and applications (68Uxx)
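For orientation, the method in the title is gradient descent whose learning rate is chosen at each step by backtracking until Armijo's sufficient-decrease condition holds. The following is a minimal generic sketch of that scheme, not the paper's specific algorithm; the function name and parameter values are illustrative:

```python
import numpy as np

def backtracking_gd(f, grad, x0, alpha0=1.0, beta=0.5, c=1e-4,
                    tol=1e-8, max_iter=1000):
    """Gradient descent with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break  # gradient small enough: near a critical point
        alpha = alpha0
        # Shrink the step until the Armijo sufficient-decrease condition
        # f(x - alpha*g) <= f(x) - c * alpha * ||g||^2 is satisfied.
        while f(x - alpha * g) > f(x) - c * alpha * g.dot(g):
            alpha *= beta
        x = x - alpha * g
    return x

# Example: minimise the quadratic f(x) = ||x - 1||^2, whose minimiser is x = 1.
f = lambda x: np.sum((x - 1.0) ** 2)
grad = lambda x: 2.0 * (x - 1.0)
x_star = backtracking_gd(f, grad, np.zeros(3))
```

Unlike a fixed learning rate, the backtracking rule adapts the step size automatically, which is the "automation of learning rates" referred to in the keywords.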
Cites Work
- Numerical Optimization
- Title not available
- Title not available
- Introductory lectures on convex optimization. A basic course.
- A Stochastic Approximation Method
- Optimization
- Convergence Conditions for Ascent Methods
- Title not available
- Convergence of the Iterates of Descent Methods for Analytic Cost Functions
- Minimization of functions having Lipschitz continuous first partial derivatives
- Gradient Convergence in Gradient Methods with Errors
- Optimization and dynamical systems
- The method of steepest descent for non-linear minimization problems
- Gradient methods of maximization
- Cauchy's method of minimization
- Optimization methods for large-scale machine learning
- Gradient descent only converges to minimizers: non-isolated critical points and invariant regions
- Probabilistic line searches for stochastic optimization
- Analysis of the gradient method with an Armijo-Wolfe line search on a class of non-smooth convex functions
- Limit Points of Sequences in Metric Spaces
Cited In (6)
- Inertial Newton algorithms avoiding strict saddle points
- On the representation and learning of monotone triangular transport maps
- Stochastic perturbation of subgradient algorithm for nonconvex deep neural networks
- Convergence of the Momentum Method for Semialgebraic Functions with Locally Lipschitz Gradients
- A fast and simple modification of Newton's method avoiding saddle points
- Title not available