Nonconvex regularization for sparse neural networks
DOI: 10.1016/j.acha.2022.05.003 · OpenAlex: W4281606071 · MaRDI QID: Q2168678
Konstantin Pieper, Armenak Petrosyan
Publication date: 26 August 2022
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/2004.11515
Cites Work
- Greedy function approximation: a gradient boosting machine
- Nearly unbiased variable selection under minimax concave penalty
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Relaxation for a class of nonconvex functionals defined on measures
- CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Integral representation of nonconvex functionals defined on measures
- Harmonic analysis of neural networks
- Sparsest piecewise-linear regression of one-dimensional data
- The Barron space and the flow-induced function spaces for neural network models
- Transformed \(\ell_1\) regularization for learning sparse deep neural networks
- On the linear convergence rates of exchange and continuous methods for total variation minimization
- Neural network with unbounded activation functions is universal approximator
- SparseNet: Coordinate Descent With Nonconvex Penalties
- Limiting Aspects of Nonconvex ${TV}^{\phi}$ Models
- Integral combinations of Heavisides
- New lower semicontinuity results for nonconvex functionals defined on measures
- Universal approximation bounds for superpositions of a sigmoidal function
- Hinging hyperplanes for regression, classification, and function approximation
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With $\ell^1$ and $\ell^0$ Controls
- Inverse problems in spaces of measures
- The Gap between Theory and Practice in Function Approximation with Deep Neural Networks
- Linear convergence of accelerated conditional gradient algorithms in spaces of measures
- Breaking the Curse of Dimensionality with Convex Neural Networks
- ℓ1 Regularization in Infinite Dimensional Feature Spaces
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- The Alternating Descent Conditional Gradient Method for Sparse Inverse Problems