Learning with recurrent neural networks (Q1125204): Difference between revisions
From MaRDI portal
scientific article
| Language | Label | Description | Also known as |
| --- | --- | --- | --- |
| English | Learning with recurrent neural networks | scientific article | |
Statements
Learning with recurrent neural networks (English)
0 references
6 December 1999
0 references
The human brain is one of the motivations for studying neural networks. The fundamental idea of a neural network can be understood as a complex mapping realized by a network of simple elements, the neurons. Usually the neurons form an arbitrary acyclic graph, and the function class computed by neural networks is parameterized; a learning algorithm is a method that chooses the number of parameters and appropriate values for them. As standard neural networks deal with real vectors of a fixed dimension, whereas most symbolic data come in structured form, it becomes necessary to encode the structured data as real vectors. In the encoding process, however, important structure may be hidden, so that it becomes difficult to access the relevant information.

In this monograph the author addresses the problems that arise during the encoding process from inappropriate or redundant encodings, which not only waste space with superfluous information but also slow down the training process and impair the generalization ability of the network. Subsequently, the author relates standard neural networks to recurrent and folding networks. He discusses the conditions necessary for the practical use of the learning approach and points out that, for recurrent as well as standard feedforward networks, `probably approximately correct' (PAC) learnability leaves some questions concerning the formalization of learnability unsolved. Of the five chapters in total, Chapters 3-5 deal with the verification of the above-mentioned topics, namely the approximation ability, the learnability, and the complexity of training, respectively.
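The core idea of a folding network — recursively encoding a structured object such as a tree into a real vector of fixed dimension — can be sketched as follows. This is a minimal illustrative example, not the author's construction: the encoding dimension `D`, the tuple representation of trees, and the random parameters `W` and `b` are all hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # fixed encoding dimension (hypothetical choice)

# Parameters of the folding step: combine a scalar node label with the
# encodings of the two subtrees into one vector in R^D.
W = rng.normal(scale=0.5, size=(D, 1 + 2 * D))
b = np.zeros(D)
EMPTY = np.zeros(D)  # encoding of the empty tree

def encode(tree):
    """Recursively fold a binary tree (label, left, right) into a vector in R^D."""
    if tree is None:
        return EMPTY
    label, left, right = tree
    # Concatenate the label with the child encodings, then apply one
    # tanh layer -- the same weights are reused at every node.
    x = np.concatenate(([label], encode(left), encode(right)))
    return np.tanh(W @ x + b)

# Example: trees of different sizes and shapes are all mapped to R^D.
t = (1.0, (2.0, None, None), (3.0, None, (4.0, None, None)))
code = encode(t)
print(code.shape)  # (4,)
```

The sketch illustrates the tension discussed in the review: the tree's structure is compressed into a fixed number of real coordinates, so a poorly chosen encoding dimension or weight sharing scheme can hide exactly the structural information the learner needs.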
0 references
tree automata
0 references
noisy computation
0 references
model free learning
0 references
model dependent learning
0 references
VC-dimension
0 references
neural networks
0 references
PAC learning
0 references
folding network
0 references