Batch gradient method with smoothing \(L_{1/2}\) regularization for training of feedforward neural networks
DOI: 10.1016/j.neunet.2013.11.006
zbMath: 1298.68233
OpenAlex: W2088350412
Wikidata: Q50706014 (Scholia: Q50706014)
MaRDI QID: Q470178
Authors: Jacek M. Zurada, Wei Wu, Dakun Yang, Qinwei Fan, Yan Liu, Ji'an Wang
Publication date: 12 November 2014
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2013.11.006
MSC classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
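The paper studies batch (full-gradient) training of feedforward networks with an \(L_{1/2}\) penalty, where the non-differentiability of \(|w|^{1/2}\) at zero is removed by smoothing the absolute value near the origin before taking the square root. The sketch below is a minimal illustration of that idea only: the linear least-squares model, the Huber-style smoothing function `smooth_abs`, and all parameter values are assumptions for demonstration, not the authors' exact construction (the paper trains a multilayer network with its own piecewise smoothing function).

```python
import numpy as np

def smooth_abs(w, a=0.1):
    # C^1 smoothing of |w| near zero (an assumed Huber-style choice,
    # not necessarily the paper's smoothing function)
    return np.where(np.abs(w) >= a, np.abs(w), w**2 / (2 * a) + a / 2)

def smooth_abs_grad(w, a=0.1):
    return np.where(np.abs(w) >= a, np.sign(w), w / a)

def penalty_grad(w, lam, a=0.1):
    # gradient of the smoothed L_{1/2} regularizer lam * sum f(w_i)^{1/2};
    # f(w) >= a/2 > 0 everywhere, so the square root is differentiable
    f = smooth_abs(w, a)
    return lam * smooth_abs_grad(w, a) / (2.0 * np.sqrt(f))

# Full-batch gradient descent on a sparse least-squares toy problem
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 0.8]            # sparse ground truth
y = X @ w_true + 0.01 * rng.normal(size=100)

w, lam, lr = np.zeros(10), 0.05, 0.01
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y) + penalty_grad(w, lam)
    w -= lr * grad                        # one batch gradient step
print(np.round(w, 3))                     # redundant weights pushed toward zero
```

Note that the smoothed \(L_{1/2}\) penalty is nonconvex, so batch gradient descent is only guaranteed to reach a stationary point; establishing such (weak and strong) convergence results is the contribution of the paper.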
Related Items (7)
- Deterministic convergence analysis via smoothing group Lasso regularization and adaptive momentum for Sigma-Pi-Sigma neural network
- Convergence analysis of an augmented algorithm for fully complex-valued neural networks
- The convergence analysis of SpikeProp algorithm with smoothing \(L_{1/2}\) regularization
- A New Smoothing Approach for Piecewise Smooth Functions: Application to Some Fundamental Functions
- Convergence analysis for sigma-pi-sigma neural network based on some relaxed conditions
- A backpropagation learning algorithm with graph regularization for feedforward neural networks
- Modeling of complex dynamic systems using differential neural networks with the incorporation of a priori knowledge
Cites Work
- Sparse SAR imaging based on \(L_{1/2}\) regularization
- A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization
- Improve robustness of sparse PCA by \(L_{1}\)-norm maximization
- A modified gradient-based neuro-fuzzy learning algorithm and its convergence
- Estimating the dimension of a model
- Neural networks in optimization
- Convergence Analysis of Three Classes of Split-Complex Gradient Algorithms for Complex-Valued Recurrent Neural Networks
- A Penalty-Function Approach for Pruning Feedforward Neural Networks
- Competitive Layer Model of Discrete-Time Recurrent Neural Networks with LT Neurons
- Generalized Neural Network for Nonsmooth Nonlinear Programming Problems
- Global Convergence Rate of Recurrently Connected Neural Networks
- Some comments on \(C_p\)
- Minimization algorithms based on supervisor and searcher cooperation