Adaptive proximal SGD based on new estimating sequences for sparser ERM
DOI: 10.1016/j.ins.2023.118965
OpenAlex: W4366439771
MaRDI QID: Q6196471
Publication date: 14 March 2024
Published in: Information Sciences
Full work available at URL: https://doi.org/10.1016/j.ins.2023.118965
Keywords: sparse solution; adaptive learning rate; estimating sequences; \(\ell_1\)-norm regularized ERM; proximal stochastic gradient
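For orientation, the sketch below shows the generic proximal stochastic gradient step for \(\ell_1\)-norm regularized ERM that the keywords refer to: a stochastic gradient step on the smooth loss followed by soft-thresholding (the proximal operator of the \(\ell_1\) penalty), which is what produces sparse iterates. This is a minimal illustration with assumed names (soft_threshold, prox_sgd_l1, lam, eta) and a fixed step size; it is not the paper's adaptive estimating-sequence scheme, whose learning-rate rule and convergence analysis are the publication's contribution.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (coordinate-wise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sgd_l1(X, y, lam=1e-3, eta=0.01, epochs=5, rng=None):
    # Proximal SGD for 0.5*(x_i^T w - y_i)^2 + lam * ||w||_1 (illustrative only;
    # the paper's method replaces the fixed eta with an adaptive rule).
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = (X[i] @ w - y[i]) * X[i]                # stochastic gradient of the smooth part
            w = soft_threshold(w - eta * grad, eta * lam)  # proximal (shrinkage) step
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    w_true = np.zeros(50)
    w_true[:5] = 1.0                                       # sparse ground truth
    y = X @ w_true + 0.01 * rng.standard_normal(200)
    w_hat = prox_sgd_l1(X, y, lam=0.05, eta=0.01, epochs=20, rng=rng)
    print("nonzero coefficients:", np.count_nonzero(np.abs(w_hat) > 1e-3))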
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- An optimal method for stochastic composite optimization
- A sparsity preserving stochastic gradient methods for sparse regression
- On the complexity analysis of randomized block-coordinate descent methods
- Minimizing finite sums with the stochastic average gradient
- Pegasos: primal estimated sub-gradient solver for SVM
- Lectures on convex optimization
- Enhancing sparsity by reweighted \(\ell_1\) minimization
- First-Order Methods for Sparse Covariance Selection
- Smooth Optimization Approach for Sparse Covariance Selection
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Optimization Methods for Large-Scale Machine Learning
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Understanding Machine Learning
- A Stochastic Approximation Method