Batch gradient method with smoothing L₁/2 regularization for training of feedforward neural networks
From MaRDI portal
Publication:470178
Recommendations
- \(L_{1/2}\) regularization methods for weights sparsification of neural networks
- Convergence analyses on sparse feedforward neural networks via group lasso regularization
- Make \(\ell_1\) regularization effective in training sparse CNN
- A Penalty-Function Approach for Pruning Feedforward Neural Networks
- The convergence analysis of spikeprop algorithm with smoothing \(L_{1/2}\) regularization
Cites work
- scientific article; zbMATH DE number 51537
- scientific article; zbMATH DE number 3444596
- scientific article; zbMATH DE number 1916734
- A Penalty-Function Approach for Pruning Feedforward Neural Networks
- A modified gradient-based neuro-fuzzy learning algorithm and its convergence
- A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization
- Competitive Layer Model of Discrete-Time Recurrent Neural Networks with LT Neurons
- Convergence analysis of three classes of Split-complex gradient algorithms for complex-valued recurrent neural networks
- Estimating the dimension of a model
- Generalized Neural Network for Nonsmooth Nonlinear Programming Problems
- Global Convergence Rate of Recurrently Connected Neural Networks
- Improve robustness of sparse PCA by \(L_{1}\)-norm maximization
- Minimization algorithms based on supervisor and searcher cooperation
- Neural networks in optimization
- Some Comments on \(C_p\)
- Sparse SAR imaging based on \(L_{1/2}\) regularization
Cited in (10)
- A New Smoothing Approach for Piecewise Smooth Functions: Application to Some Fundamental Functions
- Convergence analyses on sparse feedforward neural networks via group lasso regularization
- Convergence analysis for sigma-pi-sigma neural network based on some relaxed conditions
- Deterministic convergence analysis via smoothing group Lasso regularization and adaptive momentum for Sigma-Pi-Sigma neural network
- Convergence analysis of an augmented algorithm for fully complex-valued neural networks
- Convergence analysis of the batch gradient-based neuro-fuzzy learning algorithm with smoothing \(L_{1/2}\) regularization for the first-order Takagi-Sugeno system
- \(L_{1/2}\) regularization methods for weights sparsification of neural networks
- A backpropagation learning algorithm with graph regularization for feedforward neural networks
- The convergence analysis of spikeprop algorithm with smoothing \(L_{1/2}\) regularization
- Modeling of complex dynamic systems using differential neural networks with the incorporation of a priori knowledge
This page was built for publication: Batch gradient method with smoothing \(L_{1/2}\) regularization for training of feedforward neural networks