A recalling-enhanced recurrent neural network: conjugate gradient learning algorithm and its convergence analysis
From MaRDI portal
Publication:2660959
DOI: 10.1016/j.ins.2020.01.045
zbMath: 1457.68233
OpenAlex: W3003573324
Wikidata: Q120500310 (Scholia: Q120500310)
MaRDI QID: Q2660959
Publication date: 31 March 2021
Published in: Information Sciences
Full work available at URL: https://doi.org/10.1016/j.ins.2020.01.045
Mathematics Subject Classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
Related Items
Uses Software
Cites Work
- A new class of nonlinear conjugate gradient coefficients with global convergence properties
- Efficient generalized conjugate gradient algorithms. I: Theory
- Convergence of gradient method for Elman networks
- Assisted history matching for the inversion of fractures based on discrete fracture-matrix model with different combinations of inversion parameters
- Fully complex conjugate gradient-based neural networks using Wirtinger calculus framework: deterministic convergence and its application
- Minimization of functions having Lipschitz continuous first partial derivatives
- Generating chaos by an Elman network
- Training of Elman networks and dynamic system modelling
- A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
- Function minimization by conjugate gradients
- Convergence Conditions for Ascent Methods
- Convergence Conditions for Ascent Methods. II: Some Corrections
- A spectral conjugate gradient method for unconstrained optimization