A sparsity preserving stochastic gradient methods for sparse regression
From MaRDI portal
Recommendations
- Sparse recovery by reduced variance stochastic approximation
- On stochastic accelerated gradient with convergence rate of regression learning
- scientific article; zbMATH DE number 6253925
- Dual averaging methods for regularized stochastic learning and online optimization
- Sparse online learning via truncated gradient
Cites work
- scientific article; zbMATH DE number 3790208
- scientific article; zbMATH DE number 3612778
- scientific article; zbMATH DE number 1005357
- scientific article; zbMATH DE number 1972910
- scientific article; zbMATH DE number 845714
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Stochastic Approximation Method
- A method of aggregate stochastic subgradients with on-line stepsize rules for convex stochastic programming problems
- Acceleration of Stochastic Approximation by Averaging
- An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems
- An optimal method for stochastic composite optimization
- Asymptotic Distribution of Stochastic Approximation Procedures
- Dual averaging methods for regularized stochastic learning and online optimization
- Introductory lectures on convex optimization. A basic course.
- Manifold identification in dual averaging for regularized stochastic online learning
- Model Selection and Estimation in Regression with Grouped Variables
- On a Stochastic Approximation Method
- Online learning with samples drawn from non-identical distributions
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal distributed online prediction using mini-batches
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- Primal-dual subgradient methods for convex problems
- Randomized smoothing for stochastic optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Smooth minimization of non-smooth functions
- Stochastic Estimation of the Maximum of a Regression Function
- Trace norm regularization: reformulations, algorithms, and multi-task learning
- Stochastic quasigradient methods and their application to system optimization
Cited in (17)
- Gradient flows and randomised thresholding: sparse inversion and classification
- On stochastic accelerated gradient with convergence rate of regression learning
- Sparse recovery by reduced variance stochastic approximation
- Stochastic forward-backward splitting for monotone inclusions
- A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary
- On the convergence rate of sparse grid least squares regression
- A note on sparse least-squares regression
- Adaptive proximal SGD based on new estimating sequences for sparser ERM
- Online sparse identification for regression models
- A general framework for fast stagewise algorithms
- Max-affine regression via first-order methods
- ADMM Algorithmic Regularization Paths for Sparse Statistical Machine Learning
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- scientific article; zbMATH DE number 7306860
- Sparse high-dimensional regression: exact scalable algorithms and phase transitions
- Convergence of stochastic proximal gradient algorithm
- Utilizing second order information in minibatch stochastic variance reduced proximal iterations
This page was built for publication: A sparsity preserving stochastic gradient methods for sparse regression