Primal-dual subgradient methods for convex problems

Publication: 116219

DOI: 10.1007/s10107-007-0149-x
zbMath: 1191.90038
OpenAlex: W2169713291
Wikidata: Q105583392
Scholia: Q105583392
MaRDI QID: Q116219

Yurii Nesterov

Publication date: 19 June 2007

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://doi.org/10.1007/s10107-007-0149-x



Related Items

- The Walrasian equilibrium and centralized distributed optimization in terms of modern convex optimization methods on the example of resource allocation problem
- Efficient numerical methods to solve sparse linear equations with application to PageRank
- Particle dual averaging: optimization of mean field neural network with global convergence rate analysis*
- Stochastic approximation versus sample average approximation for Wasserstein barycenters
- Discrete Choice Prox-Functions on the Simplex
- Bayesian influence diagnostics using normalized functional Bregman divergence
- Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
- Subgradient ellipsoid method for nonsmooth convex problems
- Unifying mirror descent and dual averaging
- An extrapolated iteratively reweighted \(\ell_1\) method with complexity analysis
- Optimal anytime regret with two experts
- An indefinite proximal subgradient-based algorithm for nonsmooth composite optimization
- Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
- Approximate Newton Policy Gradient Algorithms
- Differentially private distributed online learning over time-varying digraphs via dual averaging
- Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
- Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization
- New proximal bundle algorithm based on the gradient sampling method for nonsmooth nonconvex optimization with exact and inexact information
- Learning equilibrium in bilateral bargaining games
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- A graph-based decomposition method for convex quadratic optimization with indicators
- Optimizing a multi-echelon location-inventory problem with joint replenishment: a Lipschitz \(\epsilon\)-optimal approach using Lagrangian relaxation
- Distributed strategies for mixed equilibrium problems: continuous-time theoretical approaches
- No-regret algorithms in on-line learning, games and convex optimization
- A unified stochastic approximation framework for learning in games
- Robust Accelerated Primal-Dual Methods for Computing Saddle Points
- Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations
- A STOCHASTIC APPROXIMATION ALGORITHM FOR STOCHASTIC SEMIDEFINITE PROGRAMMING
- Stochastic incremental mirror descent algorithms with Nesterov smoothing
- A randomized progressive hedging algorithm for stochastic variational inequality
- Continuous time learning algorithms in optimization and game theory
- First-order methods for convex optimization
- A game-theory-based scheme to facilitate consensus latency minimization in sharding blockchain
- An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
- An Inverse-Adjusted Best Response Algorithm for Nash Equilibria
- Accelerated dual-averaging primal–dual method for composite convex minimization
- Subgradient algorithms on Riemannian manifolds of lower bounded curvatures
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- Optimization Methods for Large-Scale Machine Learning
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- An acceleration procedure for optimal first-order methods
- New analysis and results for the Frank-Wolfe method
- Essentials of numerical nonsmooth optimization
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Hessian Barrier Algorithms for Linearly Constrained Optimization Problems
- Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
- Subsampling Algorithms for Semidefinite Programming
- Scalable Semidefinite Programming
- On the Convergence of Mirror Descent beyond Stochastic Convex Programming
- Inexact model: a framework for optimization and variational inequalities
- On the robustness of learning in games with stochastically perturbed payoff observations
- Optimum dimensional synthesis of planar mechanisms with geometric constraints
- Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises
- Replicator dynamics: old and new
- Stochastic mirror descent dynamics and their convergence in monotone variational inequalities
- A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks
- Inexact subgradient methods for quasi-convex optimization problems
- OSGA: a fast subgradient algorithm with optimal complexity
- New results on subgradient methods for strongly convex optimization problems with a unified analysis
- MAGMA: Multilevel Accelerated Gradient Mirror Descent Algorithm for Large-Scale Convex Composite Minimization
- Structured Sparsity: Discrete and Convex Approaches
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
- A relax-and-cut framework for large-scale maximum weight connected subgraph problems
- Hedge algorithm and dual averaging schemes
- Inertial Game Dynamics and Applications to Constrained Optimization
- A Level-Set Method for Convex Optimization with a Feasible Solution Path
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Subgradient method for nonconvex nonsmooth optimization
- Asymptotic optimality in stochastic optimization
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
- Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints
- Some multivariate risk indicators: Minimization by using a Kiefer–Wolfowitz approach to the mirror stochastic algorithm
- Proximal algorithms for multicomponent image recovery problems
- Iteratively reweighted \(\ell _1\) algorithms with extrapolation
- Feature-aware regularization for sparse online learning
- Aggregation of estimators and stochastic optimization
- On the computational efficiency of subgradient methods: a case study with Lagrangian bounds
- On the Convergence of Gradient-Like Flows with Noisy Gradient Input
- Relatively Smooth Convex Optimization by First-Order Methods, and Applications
- Pegasos: primal estimated sub-gradient solver for SVM
- Barrier subgradient method
- An optimal method for stochastic composite optimization
- A partially inexact bundle method for convex semi-infinite minmax problems
- Scale-free online learning
- General Hölder smooth convergence rates follow from specialized rates assuming growth bounds
- An incremental mirror descent subgradient algorithm with random sweeping and proximal step
- A sparsity preserving stochastic gradient methods for sparse regression
- Learning in games with continuous action sets and unknown payoff functions
- Approximate dual averaging method for multiagent saddle-point problems with stochastic subgradients
- Inexact dual averaging method for distributed multi-agent optimization
- The Approximate Duality Gap Technique: A Unified Theory of First-Order Methods
- Large-scale unit commitment under uncertainty: an updated literature survey
- String-averaging incremental stochastic subgradient algorithms
- Dual subgradient algorithms for large-scale nonsmooth learning problems
- Distributed quasi-monotone subgradient algorithm for nonsmooth convex optimization over directed graphs
- Primal-dual methods for solving infinite-dimensional games
- Universal gradient methods for convex optimization problems
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Saddle point mirror descent algorithm for the robust PageRank problem
- Universal method for stochastic composite optimization problems
- Minimizing finite sums with the stochastic average gradient
- Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\)
- A continuous-time approach to online optimization
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Aggregate subgradient method for nonsmooth DC optimization
- Incrementally updated gradient methods for constrained and regularized optimization
- Ergodic, primal convergence in dual subgradient schemes for convex programming. II: The case of inconsistent primal problems
- Distributed dual averaging method for multi-agent optimization with quantized communication
- Optimization problems in statistical learning: duality and optimality conditions
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- Communication-computation tradeoff in distributed consensus optimization for MPC-based coordinated control under wireless communications
- Gradient-free method for nonsmooth distributed optimization
- Sample size selection in optimization methods for machine learning
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- Complexity bounds for primal-dual methods minimizing the model of objective function
- lmls
- A Subgradient Method Based on Gradient Sampling for Solving Convex Optimization Problems
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach
- An improved Lagrangian relaxation and dual ascent approach to facility location problems
- Faster algorithms for extensive-form game solving via improved smoothing functions
- Distributed linear regression by averaging
- A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
- Distributed and consensus optimization for non-smooth image reconstruction
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- A Simple but Usually Fast Branch-and-Bound Algorithm for the Capacitated Facility Location Problem
- A non-monotone conjugate subgradient type method for minimization of convex functions
- Flexible Bayesian dynamic modeling of correlation and covariance matrices
- Ensemble slice sampling. Parallel, black-box and gradient-free inference for correlated \& multimodal distributions
- Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems
- A family of subgradient-based methods for convex optimization problems in a unifying framework
- Universal method of searching for equilibria and stochastic equilibria in transportation networks
- A Subgradient Method for Free Material Design
- Learning in Games via Reinforcement and Regularization
- Subgradient methods for saddle-point problems
- Make \(\ell_1\) regularization effective in training sparse CNN
- Convergence rates of subgradient methods for quasi-convex optimization problems
- A distributed Bregman forward-backward algorithm for a class of Nash equilibrium problems
- Primal convergence from dual subgradient methods for convex optimization
- Surplus-based accelerated algorithms for distributed optimization over directed networks
- Gradient projection Newton algorithm for sparse collaborative learning using synthetic and real datasets of applications
- Learning in nonatomic games. I: Finite action spaces and population games
- Sparse learning via Boolean relaxations
- Quasi-monotone subgradient methods for nonsmooth convex minimization
- Large-scale unit commitment under uncertainty
- On efficient randomized algorithms for finding the PageRank vector
- On the efficiency of a randomized mirror descent algorithm in online optimization problems


