Convergence of batch split-complex backpropagation algorithm for complex-valued neural networks (Q1040120)

From MaRDI portal
OpenAlex ID: W2018027033
Cites:
- Multivariate nonlinear analysis and prediction of Shanghai stock market
- Complex domain backpropagation
- Complex-valued neural networks
- Orthogonality of Decision Boundaries in Complex-Valued Neural Networks
- Delay-dependent exponential stability for discrete-time BAM neural networks with time-varying delays
- On global exponential stability of discrete-time Hopfield neural networks with variable delays
- Q5652137


scientific article (English)

    Statements

    Convergence of batch split-complex backpropagation algorithm for complex-valued neural networks (English)
    23 November 2009
    Summary: The batch split-complex backpropagation (BSCBP) algorithm for training complex-valued neural networks is considered. For a constant learning rate, it is proved that the error function of the BSCBP algorithm decreases monotonically during the training iteration process and that the gradient of the error function tends to zero. Under an additional mild condition, the weight sequence itself is also proved to converge. A numerical example is given to support the theoretical analysis.
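The monotonicity result above can be illustrated numerically. The following is a minimal sketch (not the paper's own experiment): a single complex weight is trained by batch gradient descent with a split-complex activation (real tanh applied separately to the real and imaginary parts of the net input), and the batch error is recorded at each epoch. The dataset, true weight, and learning rate are hypothetical values chosen only for illustration.

```python
import math

def split_tanh(z):
    """Split-complex activation: tanh applied separately to Re and Im parts."""
    return complex(math.tanh(z.real), math.tanh(z.imag))

# Hypothetical toy data: targets generated by a "true" weight, so the task is learnable.
w_true = 0.8 - 0.5j
inputs = [0.5 + 0.2j, -0.3 + 0.7j, 0.9 - 0.4j, -0.6 - 0.1j]
targets = [split_tanh(w_true * x) for x in inputs]

def batch_error(w):
    """E(w) = (1/2) * sum of squared moduli of output errors over the whole batch."""
    return 0.5 * sum(abs(split_tanh(w * x) - d) ** 2 for x, d in zip(inputs, targets))

def batch_gradient(w):
    """Real gradient (dE/dw_r, dE/dw_i) of the split-complex error function."""
    gr = gi = 0.0
    for x, d in zip(inputs, targets):
        u, v = (w * x).real, (w * x).imag       # net input, split into parts
        du = (math.tanh(u) - d.real) * (1.0 - math.tanh(u) ** 2)  # real-part delta
        dv = (math.tanh(v) - d.imag) * (1.0 - math.tanh(v) ** 2)  # imag-part delta
        gr += du * x.real + dv * x.imag         # du/dw_r = x_r, dv/dw_r = x_i
        gi += -du * x.imag + dv * x.real        # du/dw_i = -x_i, dv/dw_i = x_r
    return gr, gi

# Batch (full-gradient) training with a constant learning rate.
w, lr = 0.1 + 0.1j, 0.2
errors = [batch_error(w)]
for _ in range(200):
    gr, gi = batch_gradient(w)
    w = complex(w.real - lr * gr, w.imag - lr * gi)
    errors.append(batch_error(w))

# For a sufficiently small constant learning rate, the error sequence is
# monotonically non-increasing, matching the theoretical result.
assert all(e1 >= e2 - 1e-12 for e1, e2 in zip(errors, errors[1:]))
```

Note that the gradient is taken with respect to the real and imaginary parts of the weight separately, which is exactly what "split-complex" backpropagation refers to; the monotone decrease depends on the learning rate being small enough relative to the curvature of the error surface.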
