Batch gradient method with smoothing \(L_{1/2}\) regularization for training of feedforward neural networks
DOI: 10.1016/j.neunet.2013.11.006
zbMATH Open: 1298.68233
DBLP: journals/nn/WuFZWYL14
OpenAlex: W2088350412
Wikidata: Q50706014 (Scholia: Q50706014)
MaRDI QID: Q470178
FDO: Q470178
Authors: Wei Wu, Qinwei Fan, Jacek M. Zurada, Dakun Yang, Jian Wang, Yan Liu
Publication date: 12 November 2014
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2013.11.006
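The record itself carries no algorithmic detail, but the title names a concrete training scheme: full-batch gradient descent on a feedforward network whose error function adds a smoothed \(L_{1/2}\) penalty, \(E(w) = \tilde{E}(w) + \lambda \sum_i f(w_i)^{1/2}\), where \(f\) is a smooth approximation of \(|\cdot|\) that removes the non-differentiability of \(|w|^{1/2}\) at zero. The sketch below illustrates the idea on a one-hidden-layer network; the quadratic smoother, network size, and learning-rate settings are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def smooth_abs(w, eps=0.1):
    # Smooth stand-in for |w|: exact for |w| >= eps, quadratic patch below.
    # The patch w**2/(2*eps) + eps/2 matches |w| in value and slope at
    # |w| = eps and is differentiable at 0.  (An illustrative smoother,
    # not necessarily the exact piecewise polynomial used in the paper.)
    out = np.abs(w)
    inner = out < eps
    out[inner] = w[inner] ** 2 / (2 * eps) + eps / 2
    return out

def smooth_abs_grad(w, eps=0.1):
    g = np.sign(w)
    inner = np.abs(w) < eps
    g[inner] = w[inner] / eps
    return g

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                        # full training batch
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]   # toy binary targets

lam, lr, eps = 1e-3, 0.5, 0.1                        # illustrative settings
W1 = rng.normal(scale=0.5, size=(4, 8))              # input -> hidden
W2 = rng.normal(scale=0.5, size=(8, 1))              # hidden -> output

for epoch in range(2000):
    # Forward pass over the entire batch (hence "batch gradient method").
    H = sigmoid(X @ W1)
    out = sigmoid(H @ W2)

    # Backpropagate the mean squared error.
    d_out = (out - y) * out * (1 - out)
    gW2 = H.T @ d_out / len(X)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    gW1 = X.T @ d_hid / len(X)

    # Gradient of the smoothed penalty lam * sum(f(w) ** 0.5):
    # d/dw = lam * 0.5 * f(w)**(-0.5) * f'(w).  Since f(w) >= eps/2 > 0,
    # the negative power never divides by zero.
    for W, g in ((W1, gW1), (W2, gW2)):
        g += lam * 0.5 * smooth_abs(W, eps) ** (-0.5) * smooth_abs_grad(W, eps)
        W -= lr * g                                   # batch gradient step
```

Relative to \(L_1\) or \(L_2\) penalties, the \(L_{1/2}\) term penalizes small nonzero weights more strongly and so tends to drive more of them to zero (weight sparsification); the smoothing is what makes a deterministic convergence analysis of the batch gradient iteration tractable.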
Recommendations
- \(L_{1/2}\) regularization methods for weights sparsification of neural networks
- Convergence analyses on sparse feedforward neural networks via group lasso regularization
- Make \(\ell_1\) regularization effective in training sparse CNN
- A Penalty-Function Approach for Pruning Feedforward Neural Networks
- The convergence analysis of spikeprop algorithm with smoothing \(L_{1/2}\) regularization
MSC classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
Cites Work
- Estimating the dimension of a model
- Title not available
- Some Comments on \(C_p\)
- Title not available
- Improve robustness of sparse PCA by \(L_{1}\)-norm maximization
- A Penalty-Function Approach for Pruning Feedforward Neural Networks
- A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization
- Generalized Neural Network for Nonsmooth Nonlinear Programming Problems
- Sparse SAR imaging based on \(L_{1/2}\) regularization
- Neural networks in optimization
- Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks
- Competitive Layer Model of Discrete-Time Recurrent Neural Networks with LT Neurons
- Title not available
- Global Convergence Rate of Recurrently Connected Neural Networks
- Minimization algorithms based on supervisor and searcher cooperation
- A modified gradient-based neuro-fuzzy learning algorithm and its convergence
Cited In (10)
- Convergence analyses on sparse feedforward neural networks via group lasso regularization
- Convergence analysis for sigma-pi-sigma neural network based on some relaxed conditions
- Deterministic convergence analysis via smoothing group Lasso regularization and adaptive momentum for Sigma-Pi-Sigma neural network
- Convergence analysis of an augmented algorithm for fully complex-valued neural networks
- Convergence analysis of the batch gradient-based neuro-fuzzy learning algorithm with smoothing \(L_{1/2}\) regularization for the first-order Takagi-Sugeno system
- \(L_{1/2}\) regularization methods for weights sparsification of neural networks
- A backpropagation learning algorithm with graph regularization for feedforward neural networks
- The convergence analysis of spikeprop algorithm with smoothing \(L_{1/2}\) regularization
- Modeling of complex dynamic systems using differential neural networks with the incorporation of a priori knowledge
- A New Smoothing Approach for Piecewise Smooth Functions: Application to Some Fundamental Functions