Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems


Publication:2910875

DOI: 10.1137/100802001
zbMath: 1257.90073
OpenAlex: W2095984592
MaRDI QID: Q2910875

Author: Yurii Nesterov

Publication date: 12 September 2012

Published in: SIAM Journal on Optimization

Full work available at URL: https://doi.org/10.1137/100802001
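
For context, the paper analyzes randomized coordinate descent methods (RCDM) in which coordinate i is sampled with probability proportional to L_i^alpha, where L_i is the Lipschitz constant of the i-th partial derivative. The sketch below is a minimal illustration of that sampling rule on a quadratic test objective f(x) = (1/2) x^T A x - b^T x, for which L_i = A_ii; the function name rcdm and the test problem are assumptions made for this example, not code from the paper.

```python
import numpy as np

def rcdm(A, b, iters=5000, alpha=1.0, seed=0):
    """Random coordinate descent for f(x) = 0.5*x'Ax - b'x (A symmetric PSD).

    Coordinate i has Lipschitz constant L_i = A[i, i]; coordinates are
    sampled with probability proportional to L_i**alpha (alpha = 0 gives
    uniform sampling, alpha = 1 weights by L_i).
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    L = np.diag(A).copy()            # coordinate-wise Lipschitz constants
    p = L**alpha / np.sum(L**alpha)  # sampling distribution
    x = np.zeros(n)
    g = A @ x - b                    # full gradient, updated incrementally
    for _ in range(iters):
        i = rng.choice(n, p=p)
        step = g[i] / L[i]           # exact minimization along coordinate i
        x[i] -= step
        g -= step * A[:, i]          # rank-one gradient update
    return x

# Example usage on a random well-conditioned PSD system.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M.T @ M + np.eye(50)
b = rng.standard_normal(50)
x = rcdm(A, b)
print(np.linalg.norm(A @ x - b))     # residual shrinks toward zero
```

Updating the full gradient with a rank-one correction keeps the per-iteration cost proportional to the number of nonzeros in one column of A, which is the property that makes coordinate descent attractive at huge scale.
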


Related Items (only showing first 100 items)

Accelerated Bregman Primal-Dual Methods Applied to Optimal Transport and Wasserstein Barycenter Problems
Block coordinate type methods for optimization and learning
Cyclic Coordinate-Update Algorithms for Fixed-Point Problems: Analysis and Applications
A unified analysis of variational inequality methods: variance reduction, sampling, quantization, and coordinate descent
Block-cyclic stochastic coordinate descent for deep neural networks
Cyclic Coordinate Dual Averaging with Extrapolation
Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
Unified analysis of stochastic gradient methods for composite convex and smooth optimization
Parameter estimation in a 3-parameter p-star random graph model
Cluster-based gradient method for stochastic optimal control problems with elliptic partial differential equation constraint
Adaptive coordinate sampling for stochastic primal-dual optimization
Block Policy Mirror Descent
Derivation of coordinate descent algorithms from optimal control theory
Stochastic mirror descent method for linear ill-posed problems in Banach spaces
Faster randomized block sparse Kaczmarz by averaging
Randomized Block Proximal Damped Newton Method for Composite Self-Concordant Minimization
Conjugate gradients acceleration of coordinate descent for linear systems
Importance sampling in signal processing applications
The method of randomized Bregman projections for stochastic feasibility problems
On the Global Convergence of Randomized Coordinate Gradient Descent for Nonconvex Optimization
Block mirror stochastic gradient method for stochastic optimization
Global optimization using random embeddings
Convergence of gradient algorithms for nonconvex \(C^{1+ \alpha}\) cost functions
First-order methods for convex optimization
Robust supervised learning with coordinate gradient descent
Random Coordinate Descent Methods for Nonseparable Composite Optimization
Local linear convergence of proximal coordinate descent algorithm
A bicomposition of conical projections
Unifying framework for accelerated randomized methods in convex optimization
Rows versus Columns: Randomized Kaczmarz or Gauss-Seidel for Ridge Regression
Faster convergence of a randomized coordinate descent method for linearly constrained optimization problems
Adaptive Catalyst for Smooth Convex Optimization
An introduction to continuous optimization for imaging
An acceleration procedure for optimal first-order methods
Laplacian-based semi-supervised learning in multilayer hypergraphs by coordinate descent
Coordinate descent methods beyond smoothness and separability
Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
Worst case complexity of direct search under convexity
Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
Stochastic subspace correction methods and fault tolerance
Variational Image Regularization with Euler's Elastica Using a Discrete Gradient Scheme
An alternating minimization method for robust principal component analysis
Computing the Best Approximation over the Intersection of a Polyhedral Set and the Doubly Nonnegative Cone
Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations with Random Sweeping
Iteration Complexity of a Block Coordinate Gradient Descent Method for Convex Optimization
Direct Search Based on Probabilistic Descent
The Supporting Halfspace-Quadratic Programming Strategy for the Dual of the Best Approximation Problem
On the Convergence of Projected-Gradient Methods with Low-Rank Projections for Smooth Convex Minimization over Trace-Norm Balls and Related Problems
On Solving Large-Scale Polynomial Convex Problems by Randomized First-Order Algorithms
Convergence Analysis of Inexact Randomized Iterative Methods
On solving the densest k-subgraph problem on large graphs
A generic coordinate descent solver for non-smooth convex optimisation
Bregman Finito/MISO for Nonconvex Regularized Finite Sum Minimization without Lipschitz Gradient Continuity
Linear Convergence of Random Dual Coordinate Descent on Nonpolyhedral Convex Problems
Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization
On the optimal order of worst case complexity of direct search
An attention algorithm for solving large scale structured \(l_0\)-norm penalty estimation problems
Solving norm constrained portfolio optimization via coordinate-wise descent algorithms
On the rate of convergence of the proximal alternating linearized minimization algorithm for convex problems
Stochastic accelerated alternating direction method of multipliers with importance sampling
On obtaining sparse semantic solutions for inverse problems, control, and neural network training
Using neural networks to accelerate the solution of the Boltzmann equation
Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
An accelerated coordinate gradient descent algorithm for non-separable composite optimization
Oracle complexity separation in convex optimization
Inexact coordinate descent: complexity and preconditioning
A flexible coordinate descent method
On optimal probabilities in stochastic coordinate descent methods
Phase-only transmit beampattern design for large phased array antennas with multi-point nulling
Empirical likelihood confidence tubes for functional parameters in plug-in estimation
Sampling Kaczmarz-Motzkin method for linear feasibility problems: generalization and acceleration
Extended randomized Kaczmarz method for sparse least squares and impulsive noise problems
Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
Block layer decomposition schemes for training deep neural networks
Accelerated sampling Kaczmarz Motzkin algorithm for the linear feasibility problem
Accelerated parallel and distributed algorithm using limited internal memory for nonnegative matrix factorization
On N. Z. Shor's three scientific ideas
A globally convergent algorithm for nonconvex optimization based on block coordinate update
iPiasco: inertial proximal algorithm for strongly convex optimization
Subgradient methods for huge-scale optimization problems
Randomness and permutations in coordinate descent methods
On relaxed greedy randomized coordinate descent methods for solving large linear least-squares problems
Subspace correction methods in algebraic multi-level frames
Duality and nonlinear graph Laplacians
Synchronous parallel block coordinate descent method for nonsmooth convex function minimization
On the relation between the randomized extended Kaczmarz algorithm and coordinate descent
Lower bounds for finding stationary points I
Efficient first-order methods for convex minimization: a constructive approach
Emergence of price-taking behavior
On stochastic mirror-prox algorithms for stochastic Cartesian variational inequalities: randomized block coordinate and optimal averaging schemes
Generalization of a result of Fabian on the asymptotic normality of stochastic approximation
Linear convergence of the randomized sparse Kaczmarz method
A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
Optimization for deep learning: an overview
The 2-coordinate descent method for solving double-sided simplex constrained minimization problems
Primal-dual block-proximal splitting for a class of non-convex problems
Blocks of coordinates, stochastic programming, and markets
Stochastic block-coordinate gradient projection algorithms for submodular maximization

This page was built for publication: Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems