Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization


DOI: 10.1007/s10107-014-0839-0
zbMath: 1342.90103
arXiv: 1309.2375
OpenAlex: W2118545728
MaRDI QID: Q5962715

Shai Shalev-Shwartz, Tong Zhang

Publication date: 23 February 2016

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1309.2375
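For orientation, the title refers to the composite objective the paper minimizes. The sketch below follows the standard prox-SDCA formulation from the linked arXiv version; notation and the informal runtime comparison are paraphrased from the abstract, not quoted:

\min_{w \in \mathbb{R}^d} \; P(w) = \frac{1}{n} \sum_{i=1}^{n} \phi_i(X_i^{\top} w) + \lambda\, g(w)

Here each \phi_i is a convex loss on example X_i, g is a 1-strongly convex regularizer (for instance g(w) = \tfrac{1}{2}\|w\|_2^2, possibly plus an \ell_1 term), and \lambda > 0 is the regularization parameter. For smooth losses, the accelerated method improves the plain prox-SDCA runtime of roughly \tilde{O}(d(n + 1/\lambda)\log(1/\epsilon)) to \tilde{O}(d(n + \sqrt{n/\lambda})\log(1/\epsilon)).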



Related Items

A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
Active Subspace of Neural Networks: Structural Analysis and Universal Attacks
Accelerated, Parallel, and Proximal Coordinate Descent
On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
On optimal probabilities in stochastic coordinate descent methods
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness
On the Convergence of Stochastic Primal-Dual Hybrid Gradient
Accelerated Methods for NonConvex Optimization
The complexity of primal-dual fixed point methods for ridge regression
A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
An aggressive reduction on the complexity of optimization for non-strongly convex objectives
Convergence rates of accelerated proximal gradient algorithms under independent noise
Linear convergence of cyclic SAGA
A Randomized Nonmonotone Block Proximal Gradient Method for a Class of Structured Nonlinear Programming
An accelerated variance reducing stochastic method with Douglas-Rachford splitting
Importance sampling in signal processing applications
Block mirror stochastic gradient method for stochastic optimization
A safe double screening strategy for elastic net support vector machine
An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
Variance reduction for root-finding problems
On the Adaptivity of Stochastic Gradient-Based Optimization
Unifying framework for accelerated randomized methods in convex optimization
Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
Primal-dual block-proximal splitting for a class of non-convex problems
On the complexity analysis of randomized block-coordinate descent methods
A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
An Optimal Algorithm for Decentralized Finite-Sum Optimization
On Faster Convergence of Cyclic Block Coordinate Descent-type Methods for Strongly Convex Minimization
Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
An extragradient-based alternating direction method for convex minimization
Generalized stochastic Frank-Wolfe algorithm with stochastic ``substitute'' gradient for structured convex optimization
An optimal randomized incremental gradient method
An accelerated directional derivative method for smooth stochastic convex optimization
Random Gradient Extrapolation for Distributed and Stochastic Optimization
A new accelerated algorithm for ill-conditioned ridge regression problems
A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
Parallel decomposition methods for linearly constrained problems subject to simple bound with application to the SVMs training
Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
Provable accelerated gradient method for nonconvex low rank optimization
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
A General Distributed Dual Coordinate Optimization Framework for Regularized Loss Minimization
Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
Block-proximal methods with spatially adapted acceleration
Stochastic primal dual fixed point method for composite optimization
Top-\(k\) multi-class SVM using multiple features
Distributed block-diagonal approximation methods for regularized empirical risk minimization
High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
Kalman-Based Stochastic Gradient Method with Stop Condition and Insensitivity to Conditioning
A generic coordinate descent solver for non-smooth convex optimisation
An Optimal High-Order Tensor Method for Convex Optimization


Cites Work