Coordinate-friendly structures, algorithms and applications
From MaRDI portal
Abstract: This paper focuses on coordinate update methods, which are useful for solving problems involving large or high-dimensional datasets. These methods decompose a problem into simple subproblems, each of which updates one variable, or a small block of variables, while fixing the others. They can handle linear and nonlinear mappings, smooth and nonsmooth functions, and both convex and nonconvex problems, and they are easy to parallelize. Their performance hinges on the subproblems being cheap to solve. To derive simple subproblems for several new classes of applications, this paper systematically studies coordinate-friendly operators that admit low-cost coordinate updates. Based on these coordinate-friendly operators, together with operator splitting techniques, we obtain new coordinate update algorithms for a variety of problems in machine learning, image processing, and subareas of optimization; several of these problems are treated with coordinate updates for the first time. The resulting algorithms scale to large instances through parallel and even asynchronous computing. Numerical examples illustrate their effectiveness.
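To make the abstract's idea concrete, here is a minimal sketch (not taken from the paper) of a cyclic coordinate update applied to a strongly convex quadratic. Each coordinate subproblem is solved exactly in closed form, and the per-coordinate cost is O(n) rather than the O(n^2) of a full gradient step, which is the "coordinate-friendly" property the paper studies; the function name and problem instance are illustrative assumptions.

```python
import numpy as np

def cyclic_coordinate_descent(A, b, num_sweeps=100):
    """Minimize f(x) = 0.5 x^T A x - b^T x for symmetric positive
    definite A by cyclically updating one coordinate at a time.

    Each 1-D subproblem is solved exactly:
        x_i <- (b_i - sum_{j != i} A[i, j] * x_j) / A[i, i],
    i.e. a Gauss-Seidel-style sweep, so every update touches only
    one row of A -- a low-cost, coordinate-friendly operation.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(num_sweeps):
        for i in range(n):
            # b[i] minus the off-diagonal part of row i applied to x
            residual = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = residual / A[i, i]
    return x

# Small SPD example: the iterates approach the minimizer A^{-1} b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cyclic_coordinate_descent(A, b)
```

For SPD systems this sweep is guaranteed to converge; the paper's contribution is identifying much broader classes of operators (nonsmooth, nonconvex, splitting-based) for which such cheap per-coordinate updates still exist.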
Recommendations
- Coordinate descent algorithms
- Parallel coordinate descent methods for big data optimization
- Cyclic coordinate-update algorithms for fixed-point problems: analysis and applications
- Efficiency of coordinate descent methods on huge-scale optimization problems
- Coordinate descent with arbitrary sampling. I: Algorithms and complexity.
Cited in (21)
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
- Asynchronous parallel primal-dual block coordinate update methods for affinely constrained convex programs
- Hybrid Jacobian and Gauss-Seidel proximal block coordinate update methods for linearly constrained convex programming
- ARock: an algorithmic framework for asynchronous parallel coordinate updates
- On unbounded delays in asynchronous parallel fixed-point algorithms
- Accelerated primal-dual proximal block coordinate updating methods for constrained convex optimization
- Randomized primal-dual proximal block coordinate updates
- Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
- Cyclic coordinate-update algorithms for fixed-point problems: analysis and applications
- Coordinatewise descent methods for leading eigenvalue problem
- Off-diagonal symmetric nonnegative matrix factorization
- Block-proximal methods with spatially adapted acceleration
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- A new primal-dual algorithm for minimizing the sum of three functions with a linear operator
- A three-operator splitting scheme and its optimization applications
- A globally convergent algorithm for nonconvex optimization based on block coordinate update
- Coordinate descent algorithms
- Markov chain block coordinate descent
- MiKM: multi-step inertial Krasnosel'skiĭ-Mann algorithm and its applications
- GAITA: a Gauss-Seidel iterative thresholding algorithm for ℓ_q regularized least squares regression
- Run-and-inspect method for nonconvex optimization and global optimality bounds for R-local minimizers
MaRDI item Q1690621