Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems



DOI: 10.1137/100802001
zbMath: 1257.90073
MaRDI QID: Q2910875

Author: Yurii Nesterov

Publication date: 12 September 2012

Published in: SIAM Journal on Optimization

Full work available at URL: https://doi.org/10.1137/100802001


65K05: Numerical mathematical programming methods

90C25: Convex programming


Related Items

A second-order method for strongly convex \(\ell_1\)-regularization problems, Parallel coordinate descent methods for big data optimization, On the optimal order of worst case complexity of direct search, On the rate of convergence of the proximal alternating linearized minimization algorithm for convex problems, Inexact coordinate descent: complexity and preconditioning, On optimal probabilities in stochastic coordinate descent methods, Empirical likelihood confidence tubes for functional parameters in plug-in estimation, On N. Z. Shor's three scientific ideas, Subgradient methods for huge-scale optimization problems, A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints, The 2-coordinate descent method for solving double-sided simplex constrained minimization problems, Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization, On the complexity analysis of randomized block-coordinate descent methods, Decomposable norm minimization with proximal-gradient homotopy algorithm, Minimizing finite sums with the stochastic average gradient, Iteration complexity analysis of block coordinate descent methods, Schwarz iterative methods: infinite space splittings, An attention algorithm for solving large scale structured \(l_0\)-norm penalty estimation problems, iPiasco: inertial proximal algorithm for strongly convex optimization, Subspace correction methods in algebraic multi-level frames, On the relation between the randomized extended Kaczmarz algorithm and coordinate descent, Solving norm constrained portfolio optimization via coordinate-wise descent algorithms, Stochastic accelerated alternating direction method of multipliers with importance sampling, A flexible coordinate descent method, Exact worst-case convergence rates of the proximal gradient method for composite convex minimization, Accelerated parallel and distributed algorithm using limited internal memory for nonnegative matrix factorization, A globally convergent algorithm for nonconvex optimization based on block coordinate update, Duality and nonlinear graph Laplacians, On stochastic mirror-prox algorithms for stochastic Cartesian variational inequalities: randomized block coordinate and optimal averaging schemes, Generalization of a result of Fabian on the asymptotic normality of stochastic approximation, Linear convergence of the randomized sparse Kaczmarz method, Blocks of coordinates, stochastic programming, and markets, Stochastic block-coordinate gradient projection algorithms for submodular maximization, Proximal alternating penalty algorithms for nonsmooth constrained convex optimization, Asynchronous parallel primal-dual block coordinate update methods for affinely constrained convex programs, Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping. II: Mean-square and linear convergence,
A derandomization approach to recovering bandlimited signals across a wide range of random sampling rates, Robust multicategory support vector machines using difference convex algorithm, Adaptively weighted large-margin angle-based classifiers, Multi-label Lagrangian support vector machine with random block coordinate descent method, Matrix completion under interval uncertainty, Accelerated primal-dual proximal block coordinate updating methods for constrained convex optimization, An optimal randomized incremental gradient method, Rate of convergence analysis of dual-based variables decomposition methods for strongly convex problems, Inexact variable metric stochastic block-coordinate descent for regularized optimization, A coordinate descent method for total variation minimization, A geometric probability randomized Kaczmarz method for large scale linear systems, Point process estimation with Mirror Prox algorithms, Generalized stochastic Frank-Wolfe algorithm with stochastic ``substitute'' gradient for structured convex optimization, Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods, An accelerated directional derivative method for smooth stochastic convex optimization, A stochastic homotopy tracking algorithm for parametric systems of nonlinear equations, On maximum residual block and two-step Gauss-Seidel algorithms for linear least-squares problems, A stochastic subspace approach to gradient-free optimization in high dimensions, Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences, Multi-block Bregman proximal alternating linearized minimization and its application to orthogonal nonnegative matrix factorization, Fastest rates for stochastic mirror descent methods, A block inertial Bregman proximal algorithm for nonsmooth nonconvex problems with application to symmetric nonnegative matrix tri-factorization, Sparse group fused Lasso for model segmentation: a hybrid approach, An integrated stochastic model and algorithm for constrained multi-item newsvendor problems by two-stage decision-making approach, On the convergence of a randomized block coordinate descent algorithm for a matrix least squares problem, Asynchronous networked aggregative games, Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems, Linear support vector regression with linear constraints, Levenberg-Marquardt method based on probabilistic Jacobian models for nonlinear equations, Cyclic coordinate descent in the Hölder smooth setting, On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization, On the convergence of a block-coordinate incremental gradient method, Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity, Accelerated proximal envelopes: application to componentwise methods, On the computational efficiency of catalyst accelerated coordinate descent, On obtaining sparse semantic solutions for inverse problems, control, and neural network training, Using neural networks to accelerate the solution of the Boltzmann equation, Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems, Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis, An accelerated coordinate gradient descent algorithm for non-separable composite optimization,
Oracle complexity separation in convex optimization, Phase-only transmit beampattern design for large phased array antennas with multi-point nulling, Sampling Kaczmarz-Motzkin method for linear feasibility problems: generalization and acceleration, Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems, Block layer decomposition schemes for training deep neural networks, Accelerated sampling Kaczmarz Motzkin algorithm for the linear feasibility problem, Randomness and permutations in coordinate descent methods, On relaxed greedy randomized coordinate descent methods for solving large linear least-squares problems, Synchronous parallel block coordinate descent method for nonsmooth convex function minimization, Lower bounds for finding stationary points I, Efficient first-order methods for convex minimization: a constructive approach, Emergence of price-taking behavior, Optimization for deep learning: an overview, Primal-dual block-proximal splitting for a class of non-convex problems, Worst-case complexity of cyclic coordinate descent: \(O(n^2)\) gap with randomized version, Random batch methods (RBM) for interacting particle systems, On convergence rate of the randomized Gauss-Seidel method, Fully asynchronous stochastic coordinate descent: a tight lower bound on the parallelism achieving linear speedup, Asynchronous Lagrangian scenario decomposition, Accelerated directional search with non-Euclidean prox-structure, Markov chain block coordinate descent, Restarting the accelerated coordinate descent method with a rough strong convexity estimate, Variant of greedy randomized Kaczmarz for ridge regression, Randomized primal-dual proximal block coordinate updates, An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration, Stochastic subspace correction methods and fault tolerance, Variational Image Regularization with Euler's Elastica Using a Discrete Gradient Scheme, An alternating minimization method for robust principal component analysis, Computing the Best Approximation over the Intersection of a Polyhedral Set and the Doubly Nonnegative Cone, On Solving Large-Scale Polynomial Convex Problems by Randomized First-Order Algorithms, Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization, Cyclic Coordinate-Update Algorithms for Fixed-Point Problems: Analysis and Applications, Randomized Block Proximal Damped Newton Method for Composite Self-Concordant Minimization, Importance sampling in signal processing applications, Rows versus Columns: Randomized Kaczmarz or Gauss-Seidel for Ridge Regression, Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems, Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations with Random Sweeping, Iteration Complexity of a Block Coordinate Gradient Descent Method for Convex Optimization, Direct Search Based on Probabilistic Descent, The Supporting Halfspace-Quadratic Programming Strategy for the Dual of the Best Approximation Problem, An introduction to continuous optimization for imaging, An acceleration procedure for optimal first-order methods, Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization, Worst case complexity of direct search under convexity, Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm, On the Convergence of Projected-Gradient Methods with Low-Rank Projections for Smooth Convex Minimization over Trace-Norm Balls and Related Problems,
Convergence Analysis of Inexact Randomized Iterative Methods, On solving the densest \(k\)-subgraph problem on large graphs, A generic coordinate descent solver for non-smooth convex optimisation, Bregman Finito/MISO for Nonconvex Regularized Finite Sum Minimization without Lipschitz Gradient Continuity, Linear Convergence of Random Dual Coordinate Descent on Nonpolyhedral Convex Problems, Accelerated Bregman Primal-Dual Methods Applied to Optimal Transport and Wasserstein Barycenter Problems, Block coordinate type methods for optimization and learning, A unified analysis of variational inequality methods: variance reduction, sampling, quantization, and coordinate descent, Block-cyclic stochastic coordinate descent for deep neural networks, Cyclic Coordinate Dual Averaging with Extrapolation, Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods, Unified analysis of stochastic gradient methods for composite convex and smooth optimization, Parameter estimation in a 3‐parameter p‐star random graph model, Cluster‐based gradient method for stochastic optimal control problems with elliptic partial differential equation constraint, Adaptive coordinate sampling for stochastic primal–dual optimization, Block Policy Mirror Descent, Derivation of coordinate descent algorithms from optimal control theory, Stochastic mirror descent method for linear ill-posed problems in Banach spaces, Faster randomized block sparse Kaczmarz by averaging, Conjugate gradients acceleration of coordinate descent for linear systems, The method of randomized Bregman projections for stochastic feasibility problems, On the Global Convergence of Randomized Coordinate Gradient Descent for Nonconvex Optimization, Block mirror stochastic gradient method for stochastic optimization, Global optimization using random embeddings, Convergence of gradient algorithms for nonconvex \(C^{1+ \alpha}\) cost functions, First-order methods for convex optimization, Robust supervised learning with coordinate gradient descent, Random Coordinate Descent Methods for Nonseparable Composite Optimization, Adaptive Catalyst for Smooth Convex Optimization, Iterative positive thresholding algorithm for non-negative sparse optimization, A Randomized Nonmonotone Block Proximal Gradient Method for a Class of Structured Nonlinear Programming, Batched Stochastic Gradient Descent with Weighted Sampling, Avoiding Communication in Primal and Dual Block Coordinate Descent Methods, On the complexity of parallel coordinate descent, Optimization Methods for Large-Scale Machine Learning, A Coordinate-Descent Primal-Dual Algorithm with Large Step Size and Possibly Nonseparable Functions, Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice, Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization, Distributed Learning with Sparse Communications by Identification, A remark on accelerated block coordinate descent for computing the proximity operators of a sum of convex functions, Understanding large text corpora via sparse machine learning, Randomized Gradient Boosting Machine, Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms, On Adaptive Sketch-and-Project for Solving Linear Systems, Proximal Gradient Methods with Adaptive Subspace Sampling, Control analysis and design via randomised coordinate polynomial minimisation,
Proximal Gradient Methods for Machine Learning and Imaging, A derivative-free affine scaling trust region methods based on probabilistic models with new nonmonotone line search technique for linear inequality constrained minimization without strict complementarity, Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes, Efficient numerical methods to solve sparse linear equations with application to PageRank, On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent, On the Convergence of Stochastic Primal-Dual Hybrid Gradient, slimTrain: A Stochastic Approximation Method for Training Separable Deep Neural Networks, On Synchronous, Asynchronous, and Randomized Best-Response Schemes for Stochastic Nash Games, On the Efficiency of Random Permutation for ADMM and Coordinate Descent, Analysis of the Block Coordinate Descent Method for Linear Ill-Posed Problems, Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory, Analyzing random permutations for cyclic coordinate descent, Convergence analysis of the Fast Subspace Descent method for convex optimization problems, Faster Kriging: Facing High-Dimensional Simulators, Asynchronous Schemes for Stochastic and Misspecified Potential Games and Nonconvex Optimization, A Method with Convergence Rates for Optimization Problems with Variational Inequality Constraints, Two Symmetrized Coordinate Descent Methods Can Be \(O(n^2)\) Times Slower Than the Randomized Version, Faster Randomized Block Kaczmarz Algorithms, Stochastic sub-sampled Newton method with variance reduction, Coordinatewise Descent Methods for Leading Eigenvalue Problem, Randomized and fault-tolerant method of subspace corrections, Block-proximal methods with spatially adapted acceleration, Feature selection method based on partial least squares and analysis of traditional Chinese medicine data, Generalized affine scaling algorithms for linear programming problems, Convergence analysis for Kaczmarz-type methods in a Hilbert space framework, Coordinate descent algorithms, On efficient randomized algorithms for finding the PageRank vector, On the efficiency of a randomized mirror descent algorithm in online optimization problems, On the convergence of inexact block coordinate descent methods for constrained optimization, Parallel block coordinate minimization with application to group regularized regression, Unsupervised learning of pharmacokinetic responses, On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions, Performance of first- and second-order methods for \(\ell_1\)-regularized least squares problems, Efficient block-coordinate descent algorithms for the group Lasso, Random gradient-free minimization of convex functions, Random block coordinate descent methods for linearly constrained optimization over networks, Nearly linear-time packing and covering LP solvers, achieving width-independence and \(O(1/\varepsilon)\)-convergence,
A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications, Accelerating block coordinate descent methods with identification strategies, An almost cyclic 2-coordinate descent method for singly linearly constrained problems, On the convergence of asynchronous parallel iteration with unbounded delays, Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function, A parallel line search subspace correction method for composite convex optimization, New method for solving Ivanov regularization-based support vector machine learning, On multi-step greedy randomized coordinate descent method for solving large linear least-squares problems, An Efficient Inexact ABCD Method for Least Squares Semidefinite Programming, Computational Methods for Solving Nonconvex Block-Separable Constrained Quadratic Problems, A second-order method for convex \(\ell_1\)-regularized optimization with active-set prediction, ARock: An Algorithmic Framework for Asynchronous Parallel Coordinate Updates, Coordinate descent with arbitrary sampling I: algorithms and complexity, Coordinate descent with arbitrary sampling II: expected separable overapproximation, On the convergence of the forward–backward splitting method with linesearches, Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent, Separable approximations and decomposition methods for the augmented Lagrangian, Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization, Asynchronous Stochastic Coordinate Descent: Parallelism and Convergence Properties, Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization, Efficiency of the Accelerated Coordinate Descent Method on Structured Optimization Problems, Introduction: Big data and partial differential equations, A Randomized Coordinate Descent Method with Volume Sampling, A Randomized Exchange Algorithm for Computing Optimal Approximate Designs of Experiments, A Block Successive Upper-Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization, Iterative Proportional Scaling Revisited: A Modern Optimization Perspective, Solving Fused Penalty Estimation Problems via Block Splitting Algorithms, Accelerated, Parallel, and Proximal Coordinate Descent, The Cyclic Block Conditional Gradient Method for Convex Optimization Problems, An accelerated randomized Kaczmarz algorithm, An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization, Convergence Properties of the Randomized Extended Gauss-Seidel and Kaczmarz Methods, A proximal block minimization method of multipliers with a substitution procedure, Distributed Block Coordinate Descent for Minimizing Partially Separable Functions, Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints, Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds