Convergence of an online split-complex gradient algorithm for complex-valued neural networks (Q965772)

From MaRDI portal
Wikidata QID: Q58650650
OpenAlex ID: W2015044061
Cites work: Distributed Kriged Kalman Filter for Spatial Estimation
Cites work: Stability analysis of discrete Hopfield neural networks with the nonnegative definite monotone increasing weight function matrix
Cites work: Stochastic stability of neural networks with both Markovian jump parameters and continuously distributed delays
Cites work: Complex domain backpropagation
Cites work: Orthogonality of Decision Boundaries in Complex-Valued Neural Networks
Cites work: Convergence of batch split-complex backpropagation algorithm for complex-valued neural networks


Language: English
Label: Convergence of an online split-complex gradient algorithm for complex-valued neural networks
Description: scientific article

    Statements

    Convergence of an online split-complex gradient algorithm for complex-valued neural networks (English)
    Publication date: 26 April 2010
    Summary: The online gradient method has been widely used in training neural networks. In this paper we consider an online split-complex gradient algorithm for complex-valued neural networks, with an adaptive learning rate chosen during the training procedure. Under certain conditions, by first establishing the monotonicity of the error function, we prove that the gradient of the error function tends to zero and that the weight sequence converges to a fixed point. A numerical example supports the theoretical findings.
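    The split-complex scheme the summary refers to applies a real activation separately to the real and imaginary parts of a neuron's net input, so the weight update splits into two real gradient descents. The sketch below is a minimal illustration for a single complex neuron under assumptions of ours, not the paper's exact algorithm or its convergence conditions: a tanh split activation, toy teacher-generated data, and a hypothetical geometric learning-rate decay standing in for the paper's adaptive rate.

    ```python
    import numpy as np

    def g(t):
        # Real-valued activation applied componentwise.
        return np.tanh(t)

    def g_prime(t):
        return 1.0 - np.tanh(t) ** 2

    def split_act(u):
        # Split-complex activation: g applied to Re(u) and Im(u) separately.
        return g(u.real) + 1j * g(u.imag)

    def online_scg_train(X, D, w, eta0=0.1, decay=0.99, epochs=300):
        """Online split-complex gradient descent for one complex neuron
        y = f(w . x), with a (hypothetical) geometrically decaying rate."""
        eta = eta0
        for _ in range(epochs):
            for x, d in zip(X, D):
                u = np.dot(w, x)
                y = split_act(u)
                # Real and imaginary error terms, each through its own g'.
                delta = ((d.real - y.real) * g_prime(u.real)
                         + 1j * (d.imag - y.imag) * g_prime(u.imag))
                # Combined update of Re(w) and Im(w) in one complex step.
                w = w + eta * delta * np.conj(x)
            eta *= decay
        return w

    def mse(X, D, w):
        Y = np.array([split_act(np.dot(w, x)) for x in X])
        return float(np.mean(np.abs(D - Y) ** 2))

    # Toy data from a teacher neuron (hypothetical weights), then train from zero.
    rng = np.random.default_rng(0)
    w_true = np.array([0.5 + 0.3j, -0.2 + 0.7j])
    X = rng.standard_normal((20, 2)) + 1j * rng.standard_normal((20, 2))
    D = np.array([split_act(np.dot(w_true, x)) for x in X])
    w = online_scg_train(X, D, np.zeros(2, dtype=complex))
    ```

    On this realizable toy problem the training error decreases monotonically toward zero, mirroring (informally) the monotonicity and gradient-convergence behavior the paper establishes under its stated conditions.
    
    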
    Keywords: online gradient method; adaptive learning

    Identifiers