Pages that link to "Item:Q116219"
From MaRDI portal
The following pages link to Primal-dual subgradient methods for convex problems (Q116219):
Displaying 50 items.
- lmls (Q116220)
- A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks (Q301668)
- OSGA: a fast subgradient algorithm with optimal complexity (Q304218)
- New results on subgradient methods for strongly convex optimization problems with a unified analysis (Q316174)
- Subgradient method for nonconvex nonsmooth optimization (Q353174)
- An optimal method for stochastic composite optimization (Q431018)
- A sparsity preserving stochastic gradient methods for sparse regression (Q457215)
- Inexact dual averaging method for distributed multi-agent optimization (Q460593)
- Dual subgradient algorithms for large-scale nonsmooth learning problems (Q484132)
- Primal-dual methods for solving infinite-dimensional games (Q493268)
- Universal gradient methods for convex optimization problems (Q494332)
- Saddle point mirror descent algorithm for the robust PageRank problem (Q505294)
- Minimizing finite sums with the stochastic average gradient (Q517295)
- A continuous-time approach to online optimization (Q520967)
- Ergodic, primal convergence in dual subgradient schemes for convex programming. II: The case of inconsistent primal problems (Q526828)
- Optimization problems in statistical learning: duality and optimality conditions (Q545113)
- Approximation accuracy, gradient methods, and error bound for structured convex optimization (Q607498)
- Pegasos: primal estimated sub-gradient solver for SVM (Q633112)
- Barrier subgradient method (Q633113)
- Distributed dual averaging method for multi-agent optimization with quantized communication (Q694789)
- Sample size selection in optimization methods for machine learning (Q715253)
- Make \(\ell_1\) regularization effective in training sparse CNN (Q782914)
- Convergence rates of subgradient methods for quasi-convex optimization problems (Q782917)
- Replicator dynamics: old and new (Q828036)
- Feature-aware regularization for sparse online learning (Q893629)
- A partially inexact bundle method for convex semi-infinite minmax problems (Q907208)
- Subgradient methods for saddle-point problems (Q1035898)
- Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises (Q1616222)
- Stochastic mirror descent dynamics and their convergence in monotone variational inequalities (Q1626529)
- A relax-and-cut framework for large-scale maximum weight connected subgraph problems (Q1652399)
- Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints (Q1683173)
- On the computational efficiency of subgradient methods: a case study with Lagrangian bounds (Q1697974)
- Scale-free online learning (Q1704560)
- Learning in games with continuous action sets and unknown payoff functions (Q1717237)
- Approximate dual averaging method for multiagent saddle-point problems with stochastic subgradients (Q1717855)
- Large-scale unit commitment under uncertainty: an updated literature survey (Q1730531)
- Distributed quasi-monotone subgradient algorithm for nonsmooth convex optimization over directed graphs (Q1737711)
- On variance reduction for stochastic smooth convex optimization with multiplicative noise (Q1739038)
- Universal method for stochastic composite optimization problems (Q1746349)
- Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\) (Q1752352)
- Complexity bounds for primal-dual methods minimizing the model of objective function (Q1785201)
- An improved Lagrangian relaxation and dual ascent approach to facility location problems (Q1789574)
- Proximal algorithms for multicomponent image recovery problems (Q1932869)
- Aggregate subgradient method for nonsmooth DC optimization (Q1996741)
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions (Q2010105)
- Communication-computation tradeoff in distributed consensus optimization for MPC-based coordinated control under wireless communications (Q2012089)
- Gradient-free method for nonsmooth distributed optimization (Q2018475)
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches (Q2022225)
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach (Q2031939)
- Distributed linear regression by averaging (Q2039793)