Adaptive proximal SGD based on new estimating sequences for sparser ERM
From MaRDI portal
Publication:6196471
Cites work
- scientific article; zbMATH DE number 7370630
- scientific article; zbMATH DE number 7246283
- scientific article; zbMATH DE number 7306860
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Stochastic Approximation Method
- A proximal stochastic gradient method with progressive variance reduction
- A sparsity preserving stochastic gradient method for sparse regression
- Adaptive subgradient methods for online learning and stochastic optimization
- An optimal method for stochastic composite optimization
- Catalyst acceleration for first-order convex optimization: from theory to practice
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- First-Order Methods for Sparse Covariance Selection
- Gradient methods for minimizing composite functions
- Katyusha: the first direct acceleration of stochastic gradient methods
- Lectures on convex optimization
- Minimizing finite sums with the stochastic average gradient
- On the complexity analysis of randomized block-coordinate descent methods
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimization methods for large-scale machine learning
- Pegasos: primal estimated sub-gradient solver for SVM
- Smooth Optimization Approach for Sparse Covariance Selection
- The Adaptive Lasso and Its Oracle Properties
- Understanding machine learning. From theory to algorithms
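The publication's topic, proximal SGD for sparse (ℓ1-regularized) ERM, can be illustrated by a minimal generic sketch. This is not the paper's adaptive algorithm; all names, step sizes, and the toy one-dimensional problem are illustrative assumptions:

```python
import random

def soft_threshold(x, t):
    # Proximal operator of t*|x|: shrinks x toward zero by t,
    # setting it exactly to zero when |x| <= t (this is what keeps iterates sparse).
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def proximal_sgd(data, lam=0.1, eta=0.05, epochs=300, seed=0):
    # Minimize (over scalar w) a toy ERM objective:
    #   (1/n) * sum_i (w*x_i - y_i)^2 / 2  +  lam * |w|
    # using stochastic gradients of the smooth part and a proximal
    # (soft-thresholding) step for the ell_1 term.
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        x, y = rng.choice(data)            # sample one example
        grad = (w * x - y) * x             # stochastic gradient of the smooth loss
        w = soft_threshold(w - eta * grad, eta * lam)  # proximal SGD step
    return w
```

For a single example `(x, y) = (1.0, 2.0)` with `lam = 0.1`, the iterates converge to the shrunken solution `w = 1.9` rather than the unregularized `w = 2.0`, showing the bias-toward-zero effect of the ℓ1 proximal step.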