An online gradient method with momentum for two-layer feedforward neural networks
From MaRDI portal
Publication:1026325
DOI: 10.1016/j.amc.2009.02.038 · zbMath: 1187.68420 · OpenAlex: W2030078509 · MaRDI QID: Q1026325
Publication date: 24 June 2009
Published in: Applied Mathematics and Computation
Full work available at URL: https://doi.org/10.1016/j.amc.2009.02.038
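The method named in the title, an online (per-sample) gradient update with a momentum term for a two-layer feedforward network, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the network architecture (sigmoid hidden layer, linear output), step size `eta`, and momentum coefficient `mu` are assumptions chosen for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_online_momentum(X, y, hidden=4, eta=0.1, mu=0.5, epochs=200, seed=0):
    """Online gradient descent with momentum for a two-layer network
    y_hat = v^T sigmoid(W x), updated one training sample at a time.
    Hyperparameters and architecture are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.5, size=(hidden, X.shape[1]))  # input-to-hidden weights
    v = rng.normal(scale=0.5, size=hidden)                # hidden-to-output weights
    dW, dv = np.zeros_like(W), np.zeros_like(v)           # momentum buffers
    for _ in range(epochs):
        for x, t in zip(X, y):
            h = sigmoid(W @ x)       # hidden-layer activations
            err = v @ h - t          # residual of the squared-error loss
            grad_v = err * h
            grad_W = np.outer(err * v * h * (1.0 - h), x)
            # momentum rule: new step = -eta * gradient + mu * (previous step)
            dv = -eta * grad_v + mu * dv
            dW = -eta * grad_W + mu * dW
            v += dv
            W += dW
    return W, v
```

A small usage example: fitting the map (x1, x2) → x1 + x2 on four points, where the training error should drop well below that of a trivial zero predictor.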
Related Items (5)
- Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm
- Convergence analysis for sigma-pi-sigma neural network based on some relaxed conditions
- The Nesterov accelerated gradient algorithm for auto-regressive exogenous models with random lost measurements: interpolation method and auxiliary model method
- Semistability of Steepest Descent with Momentum for Quadratic Functions
- Training artificial neural networks by a hybrid PSO-CS algorithm
Cites Work
- Spectral projected subgradient with a momentum term for the Lagrangean dual approach
- Convergence of gradient method for Elman networks
- Convergence of an online gradient method for feedforward neural networks with stochastic inputs
- Training multilayer perceptrons via minimization of sum of ridge functions
- Deterministic convergence of an online gradient method for neural networks
- Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method
- Stability analysis of a three-term backpropagation algorithm
- Learning representations by back-propagating errors
- Projected Gradient Methods for Nonnegative Matrix Factorization