Inexact coordinate descent: complexity and preconditioning
From MaRDI portal
Keywords: convex optimization, preconditioning, numerical experiments, iteration complexity, block coordinate descent, conjugate gradients, inexact methods
MSC classes: Numerical mathematical programming methods (65K05); Convex programming (90C25); Complexity and performance of numerical algorithms (65Y20); Numerical computation of eigenvalues and eigenvectors of matrices (65F15); Preconditioners for iterative methods (65F08); Iterative numerical methods for linear systems (65F10)
Abstract: In this paper we consider the problem of minimizing a convex function using a randomized block coordinate descent method. One of the key steps at each iteration of the algorithm is determining the update to a block of variables. Existing algorithms assume that, in order to compute the update, a particular subproblem is solved exactly. In this work we relax this requirement and allow the subproblem to be solved inexactly, leading to an inexact block coordinate descent method. Our approach incorporates the best known results for exact updates as a special case. Moreover, these theoretical guarantees are complemented by practical considerations: the use of iterative techniques to determine the update, as well as the use of preconditioning for further acceleration.
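The setting described in the abstract can be illustrated with a small sketch (not the authors' implementation): randomized block coordinate descent on a convex quadratic f(x) = ½ xᵀAx − bᵀx with A symmetric positive definite, where the block subproblem is solved inexactly by a few conjugate gradient iterations instead of a direct solve. All names and parameter choices (`n_blocks`, `inner_iters`, etc.) are illustrative assumptions.

```python
import numpy as np

def inexact_bcd(A, b, n_blocks=4, inner_iters=3, outer_iters=500, seed=0):
    """Randomized block coordinate descent for f(x) = 0.5 x^T A x - b^T x,
    with A symmetric positive definite. Each block subproblem
        min_d 0.5 d^T A_bb d + g_b^T d
    is solved *inexactly* by a few conjugate gradient steps (the 'inexact
    update' the paper studies), rather than by an exact factorization."""
    rng = np.random.default_rng(seed)
    n = b.size
    blocks = np.array_split(np.arange(n), n_blocks)
    x = np.zeros(n)
    for _ in range(outer_iters):
        blk = blocks[rng.integers(n_blocks)]   # sample one block uniformly
        g = A[blk] @ x - b[blk]                # gradient restricted to block
        A_bb = A[np.ix_(blk, blk)]             # block-diagonal Hessian piece
        # Inexact update: run a handful of CG iterations on A_bb d = -g,
        # starting from d = 0, instead of solving the system exactly.
        d = np.zeros(blk.size)
        r = -g.copy()
        p = r.copy()
        for _ in range(inner_iters):
            rr = r @ r
            if rr == 0.0:                      # block subproblem already exact
                break
            Ap = A_bb @ p
            alpha = rr / (p @ Ap)
            d += alpha * p
            r -= alpha * Ap
            p = r + (r @ r / rr) * p           # standard CG beta update
        x[blk] += d
    return x
```

Increasing `inner_iters` recovers the exact-update method as a limiting case; a preconditioned inner CG (as discussed in the paper) would replace the plain CG loop with preconditioned residual updates.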
Recommendations
- Efficiency of coordinate descent methods on huge-scale optimization problems
- A flexible coordinate descent method
- On the complexity analysis of randomized block-coordinate descent methods
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Coordinate descent with arbitrary sampling. I: Algorithms and complexity.
Cites work
- A box constrained gradient projection algorithm for compressed sensing
- A coordinate gradient descent method for nonsmooth separable minimization
- A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
- A randomized Kaczmarz algorithm with exponential convergence
- Accelerated block-coordinate relaxation for regularized optimization
- An Interior Point Method for Block Angular Optimization
- Compressed sensing
- Convergence of a block coordinate descent method for nondifferentiable minimization
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Efficient block-coordinate descent algorithms for the group Lasso
- Efficient serial and parallel coordinate descent methods for huge-scale truss topology design
- Exact matrix completion via convex optimization
- First-order methods of smooth convex optimization with inexact oracle
- Gradient methods for minimizing composite functions
- Inexact Preconditioned Conjugate Gradient Method with Inner-Outer Iteration
- Inexact block coordinate descent methods with application to non-negative matrix factorization
- Introductory lectures on convex optimization. A basic course.
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Methods of conjugate gradients for solving linear systems
- Numerical solution of saddle point problems
- On A Class of Limited Memory Preconditioners For Large Scale Linear Systems With Multiple Right-Hand Sides
- On the Nonasymptotic Convergence of Cyclic Coordinate Descent Methods
- On the convergence of inexact block coordinate descent methods for constrained optimization
- Parallel coordinate descent methods for big data optimization
- Parallel interior-point solver for structured linear programs
- Parallel stochastic gradient algorithms for large-scale matrix completion
- Paved with good intentions: analysis of a randomized block Kaczmarz method
- Quadratic regularizations in an interior-point method for primal block-angular problems
- Randomized methods for linear constraints: convergence rates and conditioning
- Rate Analysis of Inexact Dual First-Order Methods Application to Dual Decomposition
- Sparse Reconstruction by Separable Approximation
- Standardization and the group lasso penalty
- Stochastic dual coordinate ascent methods for regularized loss minimization
- Stochastic intermediate gradient method for convex problems with stochastic inexact oracle
- The University of Florida sparse matrix collection
- The self regulation problem as an inexact steepest descent method for multicriteria optimization
Cited in (27)
- On the complexity analysis of randomized block-coordinate descent methods
- An inexact coordinate descent method for the weighted \(l_{1}\)-regularized convex optimization problem
- Convergence analysis of inexact randomized iterative methods
- Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences
- Accelerating block coordinate descent methods with identification strategies
- Blending learning and inference in conditional random fields
- On optimal probabilities in stochastic coordinate descent methods
- Inexact variable metric stochastic block-coordinate descent for regularized optimization
- On the complexity of parallel coordinate descent
- A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
- A randomized nonmonotone block proximal gradient method for a class of structured nonlinear programming
- Accelerated, parallel, and proximal coordinate descent
- An inexact successive quadratic approximation method for L-1 regularized optimization
- A block coordinate variable metric forward-backward algorithm
- Unifying framework for accelerated randomized methods in convex optimization
- Distributed block coordinate descent for minimizing partially separable functions
- Computing locally injective mappings by advanced MIPS
- Zeroth-order regularized optimization (ZORO): approximately sparse gradients and adaptive sampling
- A Subspace Acceleration Method for Minimization Involving a Group Sparsity-Inducing Regularizer
- Schwarz iterative methods: infinite space splittings
- Parallel coordinate descent methods for big data optimization
- A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications
- A flexible coordinate descent method
- Computing the best approximation over the intersection of a polyhedral set and the doubly nonnegative cone
- Riemannian preconditioned coordinate descent for low multilinear rank approximation
- Optimization in high dimensions via accelerated, parallel, and proximal coordinate descent
- Coordinate descent with arbitrary sampling. II: Expected separable overapproximation.