Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
From MaRDI portal
Publication: Q5962715
Abstract: We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.
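The inner-outer acceleration described in the abstract can be illustrated with a small sketch: an inner SDCA solver for a ridge-regression subproblem, wrapped in an outer loop that adds a proximal term \((\kappa/2)\|w-v\|^2\) centered at an extrapolated point and updates the center with Nesterov-style momentum. This is a hedged illustration, not the paper's exact algorithm: the loss (squared), the fixed inner epoch count, and the parameters `kappa`, `outer`, and `inner_epochs` are all simplifying assumptions made here for the example.

```python
import numpy as np

def sdca(X, y, lam, epochs, rng):
    """Plain SDCA for ridge regression:
       min_w (1/n) sum_i 0.5*(x_i @ w - y_i)**2 + (lam/2)*||w||^2."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # exact coordinate maximization of the dual for squared loss
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + X[i] @ X[i] / (lam * n))
            alpha[i] += delta
            w += (delta / (lam * n)) * X[i]
    return w

def accelerated_sdca(X, y, lam, kappa=1.0, outer=30, inner_epochs=10, seed=0):
    """Inner-outer acceleration (illustrative): each outer step runs SDCA on
       the kappa-augmented problem centered at an extrapolated point v."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    w_prev = np.zeros(d)
    v = np.zeros(d)
    s = lam + kappa
    beta = (np.sqrt(s) - np.sqrt(lam)) / (np.sqrt(s) + np.sqrt(lam))
    for _ in range(outer):
        # (lam/2)||w||^2 + (kappa/2)||w - v||^2 = (s/2)||w - c||^2 + const,
        # with c = (kappa/s) v, so the augmented subproblem is again ridge
        # in u = w - c with shifted targets y - X @ c.
        c = (kappa / s) * v
        u = sdca(X, y - X @ c, s, inner_epochs, rng)
        w = u + c
        v = w + beta * (w - w_prev)  # Nesterov-style extrapolation
        w_prev = w
    return w
```

Each outer step is a well-conditioned subproblem (strong convexity `lam + kappa`), which is what lets the cheap inner SDCA solver make fast progress; the extrapolation step recovers an accelerated overall rate.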
Recommendations
- Stochastic dual coordinate ascent methods for regularized loss minimization
- An accelerated randomized proximal coordinate gradient method and its application to regularized empirical risk minimization
- Accelerated, parallel, and proximal coordinate descent
- A general distributed dual coordinate optimization framework for regularized loss minimization
- Accelerated iterative regularization via dual diagonal descent
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Projection-Based Regularized Dual Averaging for Stochastic Optimization
- Dual coordinate ascent methods for non-strictly convex minimization
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
Cites work
- scientific article (zbMATH DE number 6253925; title unavailable)
- DOI: 10.1162/15324430260185628
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Accelerated, parallel, and proximal coordinate descent
- Dual averaging methods for regularized stochastic learning and online optimization
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Efficient online and batch learning using forward backward splitting
- Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks
- First-order methods of smooth convex optimization with inexact oracle
- Gradient methods for minimizing composite functions
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- On the dual formulation of regularized linear systems with convex risks
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Pegasos: primal estimated sub-gradient solver for SVM
- Smooth Optimization with Approximate Gradient
- Smooth minimization of non-smooth functions
- Sparse online learning via truncated gradient
- Stochastic dual coordinate ascent methods for regularized loss minimization
- Trading accuracy for sparsity in optimization problems with sparsity constraints
- Understanding machine learning. From theory to algorithms
Cited in (80)
- Random gradient extrapolation for distributed and stochastic optimization
- An Optimal Algorithm for Decentralized Finite-Sum Optimization
- On the complexity analysis of the primal solutions for the accelerated randomized dual coordinate ascent
- On the complexity analysis of randomized block-coordinate descent methods
- Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
- An optimal randomized incremental gradient method
- An accelerated variance reducing stochastic method with Douglas-Rachford splitting
- Catalyst acceleration for first-order convex optimization: from theory to practice
- An accelerated randomized proximal coordinate gradient method and its application to regularized empirical risk minimization
- Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
- On the adaptivity of stochastic gradient-based optimization
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- On data preconditioning for regularized loss minimization
- On optimal probabilities in stochastic coordinate descent methods
- Variance reduction for root-finding problems
- Dual coordinate descent methods for logistic regression and maximum entropy models
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Distributed block-diagonal approximation methods for regularized empirical risk minimization
- High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
- Top-\(k\) multi-class SVM using multiple features
- Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness
- Dual averaging methods for regularized stochastic learning and online optimization
- Stochastic modified equations and dynamics of stochastic gradient algorithms. I: Mathematical foundations
- An accelerated directional derivative method for smooth stochastic convex optimization
- On faster convergence of cyclic block coordinate descent-type methods for strongly convex minimization
- scientific article (zbMATH DE number 7306858; title unavailable)
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- An inexact variable metric proximal point algorithm for generic quasi-Newton acceleration
- Kalman-based stochastic gradient method with stop condition and insensitivity to conditioning
- scientific article (zbMATH DE number 7255066; title unavailable)
- A unified formulation and fast accelerated proximal gradient method for classification
- Importance sampling in signal processing applications
- Accelerated methods for nonconvex optimization
- A randomized nonmonotone block proximal gradient method for a class of structured nonlinear programming
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute gradient" for structured convex optimization
- On the convergence of stochastic primal-dual hybrid gradient
- Accelerated, parallel, and proximal coordinate descent
- Bundle methods for regularized risk minimization
- Block-proximal methods with spatially adapted acceleration
- A general distributed dual coordinate optimization framework for regularized loss minimization
- Surpassing gradient descent provably: a cyclic incremental method with linear convergence rate
- Scaling up sparse support vector machines by simultaneous feature and sample reduction
- Stochastic primal dual fixed point method for composite optimization
- Linear coupling: an ultimate unification of gradient and mirror descent
- A generic coordinate descent solver for non-smooth convex optimisation
- Stochastic dual coordinate ascent methods for regularized loss minimization
- A new accelerated algorithm for ill-conditioned ridge regression problems
- scientific article (zbMATH DE number 6982986; title unavailable)
- Linear convergence of cyclic SAGA
- scientific article (zbMATH DE number 6982318; title unavailable)
- A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
- Katyusha: the first direct acceleration of stochastic gradient methods
- An efficient primal dual prox method for non-smooth optimization
- Block mirror stochastic gradient method for stochastic optimization
- An extragradient-based alternating direction method for convex minimization
- Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training
- A smooth inexact penalty reformulation of convex problems with linear constraints
- scientific article (zbMATH DE number 7306860; title unavailable)
- The complexity of primal-dual fixed point methods for ridge regression
- Provable accelerated gradient method for nonconvex low rank optimization
- Second-order stochastic optimization for machine learning in linear time
- Optimization in high dimensions via accelerated, parallel, and proximal coordinate descent
- An optimal high-order tensor method for convex optimization
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- Utilizing second order information in minibatch stochastic variance reduced proximal iterations
- A stochastic algorithm with optimal convergence rate for strongly convex optimization problems
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Primal-dual block-proximal splitting for a class of non-convex problems
- A sequential dual method for the structured ramp loss minimization
- An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
- An accelerated stochastic mirror descent method
- Active subspace of neural networks: structural analysis and universal attacks
- A safe double screening strategy for elastic net support vector machine
- An aggressive reduction on the complexity of optimization for non-strongly convex objectives
- scientific article (zbMATH DE number 7625168; title unavailable)
- Unifying framework for accelerated randomized methods in convex optimization
- Support vector machine in big data: smoothing strategy and adaptive distributed inference
- scientific article (zbMATH DE number 7415080; title unavailable)
- scientific article (zbMATH DE number 7370566; title unavailable)
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems