Convergence of an online split-complex gradient algorithm for complex-valued neural networks (Q965772)
scientific article; zbMATH DE number 5701496
Statements
Convergence of an online split-complex gradient algorithm for complex-valued neural networks (English)
Published: 26 April 2010
Summary: The online gradient method has been widely used to train neural networks. In this paper we consider an online split-complex gradient algorithm for complex-valued neural networks, with a learning rate that is adapted during training. Under certain conditions, by first establishing the monotonicity of the error function, we prove that the gradient of the error function tends to zero and that the weight sequence converges to a fixed point. A numerical example supports the theoretical findings.
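To make the setting concrete, below is a minimal sketch, in Python with NumPy, of online split-complex gradient training for a single complex-valued neuron: the activation is applied separately to the real and imaginary parts of the net input, and the weight update is derived from the real-valued gradients of those two parts. The specific network, data, and adaptive learning-rate rule here are illustrative assumptions, not the paper's exact formulation or convergence conditions.

```python
# Sketch of an online split-complex gradient update with an adaptive
# learning rate (assumed illustrative rule, not the paper's schedule).
import numpy as np

rng = np.random.default_rng(0)

def split_tanh(z):
    """Split-complex activation: tanh applied to real and imaginary parts."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

# Synthetic data: targets generated by a fixed complex "teacher" weight vector.
n, dim = 200, 3
X = rng.standard_normal((n, dim)) + 1j * rng.standard_normal((n, dim))
w_true = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
D = split_tanh(X @ w_true)

w = np.zeros(dim, dtype=complex)   # trainable weights, w = a + ib
eta, prev_err = 0.1, np.inf

for epoch in range(100):
    err = 0.0
    for x, d in zip(X, D):          # online: update after every sample
        s = x @ w                   # net input, s = u + iv
        y = split_tanh(s)
        e = y - d
        err += 0.5 * (e.real**2 + e.imag**2)
        gu = e.real * (1.0 - np.tanh(s.real)**2)  # dE/du
        gv = e.imag * (1.0 - np.tanh(s.imag)**2)  # dE/dv
        # Chain rule through u = a.p - b.q and v = a.q + b.p, where x = p + iq:
        ga = gu * x.real + gv * x.imag            # dE/da (real weight part)
        gb = -gu * x.imag + gv * x.real           # dE/db (imaginary weight part)
        w = (w.real - eta * ga) + 1j * (w.imag - eta * gb)
    # Assumed adaptive rule: shrink eta if the epoch error rose, grow otherwise.
    eta = eta * 0.7 if err > prev_err else min(eta * 1.05, 0.5)
    prev_err = err

print(f"final epoch error = {err:.6f}")
```

Treating the real and imaginary weight parts as separate real parameters keeps all gradients real-valued, which is the feature of the split-complex approach that the paper's monotonicity and convergence analysis relies on.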
Keywords: online gradient method; adaptive learning