Transformed \(\ell_1\) regularization for learning sparse deep neural networks
From MaRDI portal
Publication:2185659
DOI: 10.1016/J.NEUNET.2019.08.015
zbMath: 1434.68512
DBLP: journals/nn/MaMNZ19
arXiv: 1901.01021
OpenAlex: W2970738028
Wikidata: Q93198988
Scholia: Q93198988
MaRDI QID: Q2185659
Publication date: 5 June 2020
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/1901.01021
Related Items (5)
- On obtaining sparse semantic solutions for inverse problems, control, and neural network training
- GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee
- Nonconvex regularization for sparse neural networks
- Consistent Sparse Deep Learning: Theory and Computation
- A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- A unified approach to model selection and sparse recovery using regularized least squares
- Computing sparse representation in a highly coherent dictionary based on difference of \(L_1\) and \(L_2\)
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Minimization of transformed \(L_1\) penalty: theory, difference of convex function algorithm, and robust application in compressed sensing
- Minimization of transformed \(l_1\) penalty: closed form representation and iterative thresholding algorithms
- Transformed Schatten-1 iterative thresholding algorithms for low rank matrix completion
- A Method for Finding Structured Sparse Solutions to Nonnegative Least Squares Problems with Applications
- SparseNet: Coordinate Descent With Nonconvex Penalties
- Deep Learning: Methods and Applications
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Local Strong Homogeneity of a Regularized Estimator
- Click Prediction for Web Image Reranking Using Multimodal Sparse Coding
- Multi-Modal Curriculum Learning for Semi-Supervised Image Classification
- Sparse Approximate Solutions to Linear Systems
- Minimization of $\ell_{1-2}$ for Compressed Sensing
- Model Selection and Estimation in Regression with Grouped Variables
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution