Lyapunov stability analysis of gradient descent-learning algorithm in network training
DOI: 10.5402/2011/145801 · zbMATH Open: 1238.93069 · OpenAlex: W2159765810 · Wikidata: Q58688745 (Scholia: Q58688745) · MaRDI QID: Q420144
Author: Ahmad Banakar
Publication date: 21 May 2012
Published in: ISRN Applied Mathematics
Full work available at URL: https://doi.org/10.5402/2011/145801
Recommendations
- A multilayer neural network. II: Stability of its learning processes
- The convergence of stochastic gradient algorithms applied to learning in neural networks
- A class of asymptotically stable algorithms for learning-rate adaptation
- Analysis of gradient descent learning algorithms for multilayer feedforward neural networks
- Scientific article (zbMATH DE number 597656)
MSC classification:
- 68T05 Learning and adaptive systems in artificial intelligence
- 93D05 Lyapunov and other classical stabilities (Lagrange, Poisson, \(L^p, l^p\), etc.) in control theory
Cites Work
- Fuzzy identification of systems and its applications to modeling and control
- Generalized predictive control based on self-recurrent wavelet neural network for stable path tracking of mobile robots: adaptive learning rates approach
- Mechanical system modelling using recurrent neural networks via quasi-Newton learning methods
Cited In (5)
- A multilayer neural network. II: Stability of its learning processes
- Stabilizing and robustifying the learning mechanisms of artificial neural networks in control engineering applications
- Adaptive learning algorithm convergence in passive and reactive environments
- On delay independent stabilization analysis for a class of switched large-scale time-delay systems
- A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions