Convergence analysis for sigma-pi-sigma neural network based on some relaxed conditions
From MaRDI portal
Publication:6149503
Recommendations
- Convergence analysis of batch gradient algorithm for three classes of sigma-pi neural networks
- Convergence analysis of online gradient method for BP neural networks
- Boundedness and convergence analysis of a pi-sigma neural network based on online gradient method and sparse optimization
- Deterministic convergence analysis via smoothing group Lasso regularization and adaptive momentum for Sigma-Pi-Sigma neural network
- The convergence of stochastic gradient algorithms applied to learning in neural networks
Cites work
- scientific article; zbMATH DE number 51537 (no title available)
- An online gradient method with momentum for two-layer feedforward neural networks
- Batch gradient method with smoothing \(L_{1/2}\) regularization for training of feedforward neural networks
- Convergence analysis of online gradient method for BP neural networks
- Convergence of online gradient method with penalty for BP neural networks
- Deterministic convergence analysis via smoothing group Lasso regularization and adaptive momentum for Sigma-Pi-Sigma neural network
- Dynamic properties and a new learning mechanism in higher order neural networks
- Gradient Convergence in Gradient Methods with Errors
- Multilayer feedforward networks are universal approximators
- Training multilayer perceptrons via minimization of sum of ridge functions