On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
zbMath: 1502.90129 · arXiv: 1807.00261 · MaRDI QID: Q4969070
Publication date: 5 October 2020
Full work available at URL: https://arxiv.org/abs/1807.00261
Keywords: iteration complexity; dual first-order methods; accelerated randomized dual coordinate ascent; primal solutions; restart at any period
MSC: Convex programming (90C25); Numerical optimization and variational techniques (65K10); Stochastic programming (90C15)
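The keywords center on accelerated randomized dual coordinate ascent and the recovery of primal solutions from dual iterates. As a minimal sketch, assuming a ridge-regression instance of the regularized empirical-risk setting common in the cited coordinate-ascent literature, the code below implements plain (non-accelerated) randomized dual coordinate ascent with the standard SDCA-style closed-form update for the squared loss; the function name rdca_ridge and all parameters are hypothetical, and the acceleration and restart mechanisms analyzed in the paper are omitted for brevity.

```python
import numpy as np

def rdca_ridge(A, b, lam, n_epochs=50, seed=0):
    """Randomized dual coordinate ascent for ridge regression (sketch).

    Primal:  min_w (1/(2n)) * ||A w - b||^2 + (lam/2) * ||w||^2
    One dual variable alpha_i per sample; the primal-dual map
    w(alpha) = A^T alpha / (lam * n) yields a primal iterate at every
    dual step, which is the kind of primal solution the paper analyzes.
    """
    n, d = A.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)
    w = np.zeros(d)                      # kept equal to A^T alpha / (lam * n)
    sq_norms = np.einsum('ij,ij->i', A, A)  # per-row squared norms ||a_i||^2
    for _ in range(n_epochs * n):
        i = rng.integers(n)
        # Closed-form maximization of the dual over coordinate alpha_i
        # for the squared loss (standard SDCA update).
        delta = (b[i] - A[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
        alpha[i] += delta
        w += (delta / (lam * n)) * A[i]  # keep the primal iterate in sync
    return w, alpha

# Tiny usage check against the closed-form ridge solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
b = rng.standard_normal(200)
lam = 0.1
w_dca, _ = rdca_ridge(A, b, lam, n_epochs=100)
w_star = np.linalg.solve(A.T @ A / len(b) + lam * np.eye(5), A.T @ b / len(b))
print(np.linalg.norm(w_dca - w_star))  # should be small
```

Maintaining w alongside alpha is the design point of interest here: the primal iterate comes for free from the running dual variables, so complexity bounds on the dual iterates can be transferred to the primal sequence, as the paper does for the accelerated, restarted variant.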
Related Items (3)
- Accelerated primal-dual methods with adaptive parameters for composite convex optimization with linear constraints
- An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
- A generic coordinate descent solver for non-smooth convex optimisation
Cites Work
- Optimized first-order methods for smooth convex minimization
- Nonsmooth optimization via quasi-Newton methods
- On the complexity analysis of randomized block-coordinate descent methods
- Introductory lectures on convex optimization. A basic course.
- A fast dual proximal gradient algorithm for convex minimization and applications
- From error bounds to the complexity of first-order descent methods for convex functions
- An optimal randomized incremental gradient method
- Global error bounds for piecewise convex polynomials
- Accelerated linearized Bregman method
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Adaptive restart for accelerated gradient schemes
- Linear convergence of first order methods for non-strongly convex optimization
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Iteration complexity analysis of dual first-order methods for conic convex programming
- Convergence Analysis of Approximate Primal Solutions in Dual First-Order Methods
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Double Smoothing Technique for Large-Scale Linearly Constrained Convex Optimization
- Rate Analysis of Inexact Dual First-Order Methods: Application to Dual Decomposition
- An Accelerated Dual Gradient-Projection Algorithm for Embedded Linear Model Predictive Control
- Dual Ascent Methods for Problems with Strictly Convex Costs and Linear Constraints: A Unified Approach
- Accelerated, Parallel, and Proximal Coordinate Descent
- An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
- Error Bounds for Convex Polynomials
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- DOI: 10.1162/153244302760200704
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- Adaptive restart of accelerated gradient methods under local quadratic growth condition
- Convergence Rate Analysis of Several Splitting Schemes
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Stable signal recovery from incomplete and inaccurate measurements
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization