Distributed continuous-time accelerated neurodynamic approaches for sparse recovery via smooth approximation to L₁-minimization
From MaRDI portal
Publication: Q6535872
DOI: 10.1016/j.neunet.2024.106123
MaRDI QID: Q6535872
Authors: Junpeng Xu, Xing He
Publication date: 5 March 2024
Published in: Neural Networks
Recommendations
- Neurodynamic approaches for sparse recovery problem with linear inequality constraints
- Smoothing inertial neurodynamic approach for sparse signal reconstruction via \(L_p\)-norm minimization
- A neurodynamic algorithm for sparse signal reconstruction with finite-time convergence
- Fixed-Time Stable Neurodynamic Flow to Sparse Signal Recovery via Nonconvex L1-β2-Norm
- Generalized sparse recovery model and its neural dynamical optimization method for compressed sensing
Cites Work
- NESTA: A fast and accurate first-order method for sparse recovery
- Title not available
- Decoding by Linear Programming
- Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit
- Smoothing methods for nonsmooth, nonconvex minimization
- Projected Nesterov's Proximal-Gradient Algorithm for Sparse Signal Recovery
- Iteratively reweighted least squares minimization for sparse recovery
- Fast Distributed Gradient Methods
- Atomic decomposition by basis pursuit
- Primal-dual algorithm for distributed constrained optimization
- Distributed gradient algorithm for constrained optimization with application to load sharing in power systems
- Fast and Accurate Algorithms for Re-Weighted $\ell _{1}$-Norm Minimization
- Novel projection neurodynamic approaches for constrained convex optimization
- Dynamical Primal-Dual Nesterov Accelerated Method and Its Application to Network Optimization
- Distributed Continuous-Time Algorithm for Constrained Convex Optimizations via Nonsmooth Analysis Approach
- Exponential stability of nonlinear state-dependent delayed impulsive systems with applications
- Accelerated Distributed Nesterov Gradient Descent
- Input-to-state stability of impulsive reaction-diffusion neural networks with infinite distributed delays
- Fast primal-dual algorithm via dynamical system for a linearly constrained convex optimization problem
- Adaptive Exact Penalty Design for Constrained Distributed Optimization
- Sparse signal recovery by accelerated \(\ell_q\) \((0<q<1)\) thresholding algorithm
- Fixed-Time Stable Neurodynamic Flow to Sparse Signal Recovery via Nonconvex L1-β2-Norm
- Sparse signal reconstruction via recurrent neural networks with hyperbolic tangent function
Cited In (2)