Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
DOI: 10.1007/s10107-014-0839-0 · zbMATH Open: 1342.90103 · arXiv: 1309.2375 · OpenAlex: W2118545728 · MaRDI QID: Q5962715 · FDO: Q5962715
Authors: Shai Shalev-Shwartz, Tong Zhang
Publication date: 23 February 2016
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1309.2375
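For orientation, a minimal sketch of the problem class the paper addresses (standard regularized loss minimization; the notation below is assumed for illustration and is not part of this record):

\[
\min_{w \in \mathbb{R}^d} \; P(w) \;=\; \frac{1}{n}\sum_{i=1}^{n} \phi_i\!\left(x_i^{\top} w\right) \;+\; \lambda\, g(w),
\]

where each \(\phi_i\) is a convex loss, \(g\) is a convex regularizer, and \(\lambda > 0\). The method pairs an inner proximal stochastic dual coordinate ascent (prox-SDCA) solver with an outer Nesterov-type acceleration step; for \((1/\gamma)\)-smooth losses this improves the iteration bound from roughly \(n + 1/(\lambda\gamma)\) to roughly \(n + \sqrt{n/(\lambda\gamma)}\), up to logarithmic factors.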
Recommendations
- Stochastic dual coordinate ascent methods for regularized loss minimization
- An accelerated randomized proximal coordinate gradient method and its application to regularized empirical risk minimization
- Accelerated, parallel, and proximal coordinate descent
- A general distributed dual coordinate optimization framework for regularized loss minimization
- Accelerated iterative regularization via dual diagonal descent
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Projection-Based Regularized Dual Averaging for Stochastic Optimization
- Dual coordinate ascent methods for non-strictly convex minimization
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
MSC classes: Convex programming (90C25) · Large-scale problems in mathematical programming (90C06) · Stochastic programming (90C15)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Pegasos: primal estimated sub-gradient solver for SVM
- Smooth minimization of non-smooth functions
- Gradient methods for minimizing composite functions
- Understanding machine learning. From theory to algorithms
- Smooth Optimization with Approximate Gradient
- First-order methods of smooth convex optimization with inexact oracle
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Dual averaging methods for regularized stochastic learning and online optimization
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Accelerated, parallel, and proximal coordinate descent
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Sparse online learning via truncated gradient
- Trading accuracy for sparsity in optimization problems with sparsity constraints
- DOI: 10.1162/15324430260185628
- Stochastic dual coordinate ascent methods for regularized loss minimization
- Efficient online and batch learning using forward backward splitting
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks
- On the dual formulation of regularized linear systems with convex risks
Cited In (80)
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
- An accelerated stochastic mirror descent method
- An aggressive reduction on the complexity of optimization for non-strongly convex objectives
- A safe double screening strategy for elastic net support vector machine
- An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
- Unifying framework for accelerated randomized methods in convex optimization
- Support vector machine in big data: smoothing strategy and adaptive distributed inference
- Active subspace of neural networks: structural analysis and universal attacks
- A sequential dual method for the structured ramp loss minimization
- An optimal high-order tensor method for convex optimization
- An accelerated variance reducing stochastic method with Douglas-Rachford splitting
- On optimal probabilities in stochastic coordinate descent methods
- Linear coupling: an ultimate unification of gradient and mirror descent
- Primal-dual block-proximal splitting for a class of non-convex problems
- A generic coordinate descent solver for non-smooth convex optimisation
- On faster convergence of cyclic block coordinate descent-type methods for strongly convex minimization
- Dual coordinate descent methods for logistic regression and maximum entropy models
- An inexact variable metric proximal point algorithm for generic quasi-Newton acceleration
- Second-order stochastic optimization for machine learning in linear time
- On data preconditioning for regularized loss minimization
- A unified formulation and fast accelerated proximal gradient method for classification
- Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
- An accelerated randomized proximal coordinate gradient method and its application to regularized empirical risk minimization
- Catalyst acceleration for first-order convex optimization: from theory to practice
- Accelerated methods for nonconvex optimization
- A new accelerated algorithm for ill-conditioned ridge regression problems
- A smooth inexact penalty reformulation of convex problems with linear constraints
- Generalized stochastic Frank-Wolfe algorithm with stochastic ``substitute'' gradient for structured convex optimization
- An optimal randomized incremental gradient method
- Stochastic primal dual fixed point method for composite optimization
- Utilizing second order information in minibatch stochastic variance reduced proximal iterations
- An accelerated directional derivative method for smooth stochastic convex optimization
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Accelerated, parallel, and proximal coordinate descent
- Provable accelerated gradient method for nonconvex low rank optimization
- Block-proximal methods with spatially adapted acceleration
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- Random gradient extrapolation for distributed and stochastic optimization
- On the complexity analysis of randomized block-coordinate descent methods
- On the convergence of stochastic primal-dual hybrid gradient
- Importance sampling in signal processing applications
- Stochastic dual coordinate ascent methods for regularized loss minimization
- Block mirror stochastic gradient method for stochastic optimization
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- A randomized nonmonotone block proximal gradient method for a class of structured nonlinear programming
- Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training
- The complexity of primal-dual fixed point methods for ridge regression
- Kalman-based stochastic gradient method with stop condition and insensitivity to conditioning
- An efficient primal dual prox method for non-smooth optimization
- Optimization in high dimensions via accelerated, parallel, and proximal coordinate descent
- On the complexity analysis of the primal solutions for the accelerated randomized dual coordinate ascent
- Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
- On the adaptivity of stochastic gradient-based optimization
- Dual averaging methods for regularized stochastic learning and online optimization
- Stochastic modified equations and dynamics of stochastic gradient algorithms. I: Mathematical foundations
- Katyusha: the first direct acceleration of stochastic gradient methods
- Bundle methods for regularized risk minimization
- Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness
- Distributed block-diagonal approximation methods for regularized empirical risk minimization
- High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
- Top-\(k\) multi-class SVM using multiple features
- A stochastic algorithm with optimal convergence rate for strongly convex optimization problems
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Variance reduction for root-finding problems
- Linear convergence of cyclic SAGA
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- A general distributed dual coordinate optimization framework for regularized loss minimization
- Surpassing gradient descent provably: a cyclic incremental method with linear convergence rate
- Scaling up sparse support vector machines by simultaneous feature and sample reduction
- A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
- An extragradient-based alternating direction method for convex minimization
- An Optimal Algorithm for Decentralized Finite-Sum Optimization