Convergence Analysis of Three Classes of Split-Complex Gradient Algorithms for Complex-Valued Recurrent Neural Networks
From MaRDI portal
Publication:3057216
DOI: 10.1162/NECO_a_00021 · zbMath: 1208.68188 · Wikidata: Q51682049 · Scholia: Q51682049 · MaRDI QID: Q3057216
Lijun Liu, Huisheng Zhang, Dong-po Xu
Publication date: 24 November 2010
Published in: Neural Computation
Related Items (3)
- Convergence analysis of an augmented algorithm for fully complex-valued neural networks
- A smoothing interval neural network
- Batch gradient method with smoothing \(L_{1/2}\) regularization for training of feedforward neural networks
Cites Work
- Convergence of gradient method for Elman networks
- Convergence of gradient method for a fully recurrent neural network
- Complex-valued neural networks.
- Complex Valued Nonlinear Adaptive Filters
- A Complex-Valued RTRL Algorithm for Recurrent Neural Networks
- A fully adaptive normalized nonlinear gradient descent algorithm for complex-valued nonlinear adaptive filters
- Training Pi-Sigma Network by Online Gradient Algorithm with Penalty for Small Weight Update