Primal-dual subgradient methods for convex problems
Recommendations
- Barrier subgradient method
- A class of convergent primal-dual subgradient algorithms for decomposable convex programs
- Quasi-monotone subgradient methods for nonsmooth convex minimization
- Gradient methods for minimizing composite functions
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
Cites work
- scientific article; zbMATH DE number 858900
- scientific article; zbMATH DE number 3894826
- scientific article; zbMATH DE number 3282977
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Interior projection-like methods for monotone variational inequalities
- Introductory lectures on convex optimization. A basic course
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Smooth minimization of non-smooth functions
- The ordered subsets mirror descent optimization method with applications to tomography
- Two ``well-known'' properties of subgradient optimization
Cited in
- Asymptotic optimality in stochastic optimization
- Universal gradient methods for convex optimization problems
- On the convergence of gradient-like flows with noisy gradient input
- Primal-dual methods for solving infinite-dimensional games
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- Barrier subgradient method
- Optimizing a multi-echelon location-inventory problem with joint replenishment: a Lipschitz \(\epsilon\)-optimal approach using Lagrangian relaxation
- Solving structured nonsmooth convex optimization with complexity \(\mathcal{O}(\varepsilon^{-1/2})\)
- Saddle point mirror descent algorithm for the robust PageRank problem
- Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods
- Quasi-monotone subgradient methods for nonsmooth convex minimization
- A subgradient method based on gradient sampling for solving convex optimization problems
- An optimal method for stochastic composite optimization
- On efficient randomized algorithms for finding the PageRank vector
- Learning in games via reinforcement and regularization
- Relatively smooth convex optimization by first-order methods, and applications
- Primal convergence from dual subgradient methods for convex optimization
- Pegasos: primal estimated sub-gradient solver for SVM
- New analysis and results for the Frank-Wolfe method
- Optimization methods for large-scale machine learning
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Make \(\ell_1\) regularization effective in training sparse CNN
- A family of subgradient-based methods for convex optimization problems in a unifying framework
- Stochastic mirror descent dynamics and their convergence in monotone variational inequalities
- Sample size selection in optimization methods for machine learning
- An improved Lagrangian relaxation and dual ascent approach to facility location problems
- Primal and dual predicted decrease approximation methods
- Subgradient method for nonconvex nonsmooth optimization
- Gradient-free method for nonsmooth distributed optimization
- Inertial game dynamics and applications to constrained optimization
- A sparsity preserving stochastic gradient method for sparse regression
- Some multivariate risk indicators: minimization by using a Kiefer-Wolfowitz approach to the mirror stochastic algorithm
- The volume algorithm: Producing primal solutions with a subgradient method
- Large-scale unit commitment under uncertainty
- A survey of algorithms and analysis for adaptive online learning
- A partially inexact bundle method for convex semi-infinite minmax problems
- Dual subgradient algorithms for large-scale nonsmooth learning problems
- Primal-Dual Combinatorial Relaxation Algorithms for the Maximum Degree of Subdeterminants
- Linear coupling: an ultimate unification of gradient and mirror descent
- An algebraic theory for primal and dual substructuring methods by constraints
- OSGA: a fast subgradient algorithm with optimal complexity
- On the robustness of learning in games with stochastically perturbed payoff observations
- Distributed dual averaging method for multi-agent optimization with quantized communication
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- A class of convergent primal-dual subgradient algorithms for decomposable convex programs
- First-order methods for convex optimization
- New results on subgradient methods for strongly convex optimization problems with a unified analysis
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Universal method for stochastic composite optimization problems
- Minimizing finite sums with the stochastic average gradient
- Optimization problems in statistical learning: duality and optimality conditions
- On the convergence of mirror descent beyond stochastic convex programming
- A continuous-time approach to online optimization
- Learning in games with continuous action sets and unknown payoff functions
- lmls
- Ergodic, primal convergence in dual subgradient schemes for convex programming. II: The case of inconsistent primal problems
- A simple but usually fast branch-and-bound algorithm for the capacitated facility location problem
- Non-Euclidean restricted memory level method for large-scale convex optimization
- scientific article; zbMATH DE number 7415104
- Subsampling algorithms for semidefinite programming
- A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks
- Incrementally updated gradient methods for constrained and regularized optimization
- On the computational efficiency of subgradient methods: a case study with Lagrangian bounds
- Subgradient methods for saddle-point problems
- Inexact subgradient methods for quasi-convex optimization problems
- Primal-dual subgradient method for huge-scale linear conic problems
- Inexact dual averaging method for distributed multi-agent optimization
- Duality between subgradient and conditional gradient methods
- A distributed Bregman forward-backward algorithm for a class of Nash equilibrium problems
- A level-set method for convex optimization with a feasible solution path
- Ensemble slice sampling. Parallel, black-box and gradient-free inference for correlated & multimodal distributions
- The approximate duality gap technique: a unified theory of first-order methods
- Large-scale unit commitment under uncertainty: an updated literature survey
- Distributed linear regression by averaging
- Scale-free online learning
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- An extrapolated iteratively reweighted \(\ell_1\) method with complexity analysis
- Replicator dynamics: old and new
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Primal dual methods for Wasserstein gradient flows
- Discrete choice prox-functions on the simplex
- Scalable semidefinite programming
- A primal-dual approach to inexact subgradient methods
- Inexact model: a framework for optimization and variational inequalities
- Flexible Bayesian dynamic modeling of correlation and covariance matrices
- Unifying mirror descent and dual averaging
- Primal-dual exterior point method for convex optimization
- An inverse-adjusted best response algorithm for Nash equilibria
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach
- Complexity bounds for primal-dual methods minimizing the model of objective function
- A relax-and-cut framework for large-scale maximum weight connected subgraph problems
- Iteratively reweighted \(\ell_1\) algorithms with extrapolation
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- MAGMA: multilevel accelerated gradient mirror descent algorithm for large-scale convex composite minimization
- Universal method of searching for equilibria and stochastic equilibria in transportation networks
- Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises
- Feature-aware regularization for sparse online learning
- Subgradient algorithms on Riemannian manifolds of lower bounded curvatures