Pages that link to "Item:Q2873229"
From MaRDI portal
The following pages link to Augmented $\ell_1$ and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm (Q2873229):
Displaying 35 items.
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth (Q523179)
- Sparse + low-energy decomposition for viscous conservation laws (Q729181)
- Sparse recovery via differential inclusions (Q739470)
- Stability of the elastic net estimator (Q895982)
- A flexible ADMM algorithm for big data applications (Q1704789)
- Linear convergence of the randomized sparse Kaczmarz method (Q1717238)
- A new piecewise quadratic approximation approach for \(L_0\) norm minimization problem (Q1729942)
- An improved algorithm for basis pursuit problem and its applications (Q2009392)
- Low-rank matrix recovery via regularized nuclear norm minimization (Q2036488)
- Extragradient and extrapolation methods with generalized Bregman distances for saddle point problems (Q2157903)
- Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems (Q2168919)
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions (Q2297652)
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization (Q2355319)
- Projected shrinkage algorithm for box-constrained \(\ell _1\)-minimization (Q2361128)
- Local linear convergence of a primal-dual algorithm for the augmented convex models (Q2399239)
- Iterative methods based on soft thresholding of hierarchical tensors (Q2407677)
- On the convergence of asynchronous parallel iteration with unbounded delays (Q2422607)
- Proximal linearization methods for Schatten \(p\)-quasi-norm minimization (Q2678970)
- Variance reduction for root-finding problems (Q2689823)
- On the Convergence of Decentralized Gradient Descent (Q2821798)
- Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions (Q2821800)
- Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties (Q2954387)
- Eventual linear convergence of the Douglas-Rachford iteration for basis pursuit (Q3450036)
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions (Q4991666)
- Revisiting linearized Bregman iterations under Lipschitz-like convexity condition (Q5058656)
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization (Q5254995)
- Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning (Q5381126)
- A time continuation based fast approximate algorithm for compressed sensing related optimization (Q5501146)
- Regularized Kaczmarz Algorithms for Tensor Recovery (Q5860371)
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm (Q5962728)
- Low-rank matrix recovery problem minimizing a new ratio of two norms approximating the rank function then using an ADMM-type solver with applications (Q6056241)
- Sparse sampling Kaczmarz–Motzkin method with linear convergence (Q6139724)
- Cardinality minimization, constraints, and regularization: a survey (Q6585278)
- Optimality conditions and numerical algorithms for a class of linearly constrained minimax optimization problems (Q6601202)
- Acceleration and restart for the randomized Bregman-Kaczmarz method (Q6615435)