Primal-dual subgradient methods for convex problems
From MaRDI portal
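For context, the method this page indexes is Nesterov's dual averaging scheme: at each step the full history of subgradients is averaged, and the next iterate minimizes this aggregate linear model plus a scaled prox term. A minimal illustrative sketch over \(\mathbb{R}^n\) with the Euclidean prox function \(d(x) = \|x - x_0\|^2/2\) (so the prox step is closed-form) is given below; the function names and the specific scaling sequence \(\beta_k = \gamma\sqrt{k}\) are choices for this sketch, not the paper's exact parameters:

```python
import numpy as np

def dual_averaging(subgrad, x0, iters=1000, gamma=1.0):
    """Simple dual averaging for nonsmooth convex minimization.

    subgrad: callable returning a subgradient of f at x.
    Uses Euclidean prox d(x) = ||x - x0||^2 / 2, so the minimization
    step x = argmin_x { <z, x> + beta * d(x) } has a closed form.
    Returns the running average of the iterates, for which the
    standard O(1/sqrt(N)) objective-gap guarantee holds.
    """
    x = np.asarray(x0, dtype=float)
    z = np.zeros_like(x)          # running sum of subgradients
    avg = np.zeros_like(x)        # running average of iterates
    for k in range(1, iters + 1):
        z += subgrad(x)
        beta = gamma * np.sqrt(k)  # nondecreasing scaling sequence
        x = x0 - z / beta          # closed-form prox step
        avg += (x - avg) / k       # incremental mean of x_1..x_k
    return avg

# Minimize f(x) = ||x||_1 (minimizer 0); sign(x) is a valid subgradient.
x_star = dual_averaging(lambda x: np.sign(x), x0=np.ones(3))
```

The averaged iterate approaches the minimizer at the usual subgradient-method rate; tuning `gamma` to the problem's Lipschitz constant and diameter tightens the constant in the bound.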
Recommendations
- Barrier subgradient method
- A class of convergent primal-dual subgradient algorithms for decomposable convex programs
- Quasi-monotone subgradient methods for nonsmooth convex minimization
- Gradient methods for minimizing composite functions
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
Cites work
- scientific article; zbMATH DE number 858900
- scientific article; zbMATH DE number 3894826
- scientific article; zbMATH DE number 3282977
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Interior projection-like methods for monotone variational inequalities
- Introductory lectures on convex optimization. A basic course.
- Mirror descent and nonlinear projected subgradient methods for convex optimization.
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Smooth minimization of non-smooth functions
- The ordered subsets mirror descent optimization method with applications to tomography
- Two "well-known" properties of subgradient optimization
Cited in
(only the first 100 items are shown)
- Coordinate Descent Face-Off: Primal or Dual?
- A survey of algorithms and analysis for adaptive online learning
- Scale-free online learning
- Unifying mirror descent and dual averaging
- A family of subgradient-based methods for convex optimization problems in a unifying framework
- Robust Accelerated Primal-Dual Methods for Computing Saddle Points
- Essentials of numerical nonsmooth optimization
- Proximal algorithms for multicomponent image recovery problems
- Surplus-based accelerated algorithms for distributed optimization over directed networks
- Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
- Primal dual methods for Wasserstein gradient flows
- Sampling from conditional distributions of simplified vines
- Gradient projection Newton algorithm for sparse collaborative learning using synthetic and real datasets of applications
- Learning in nonatomic games. I: Finite action spaces and population games
- Replicator dynamics: old and new
- Incremental subgradient algorithms with dynamic step sizes for separable convex optimizations
- Aggregation of estimators and stochastic optimization
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- CV@R-penalised portfolio optimisation with biased stochastic mirror descent
- Discrete choice prox-functions on the simplex
- An optimal method for stochastic composite optimization
- Inexact model: a framework for optimization and variational inequalities
- Large-scale unit commitment under uncertainty: an updated literature survey
- Linear coupling: an ultimate unification of gradient and mirror descent
- Dual subgradient algorithms for large-scale nonsmooth learning problems
- Multi-fidelity No-U-Turn Sampling
- Bayesian Conditional Transformation Models
- An extrapolated iteratively reweighted \(\ell_1\) method with complexity analysis
- Aggregate subgradient method for nonsmooth DC optimization
- Stochastic incremental mirror descent algorithms with Nesterov smoothing
- An incremental mirror descent subgradient algorithm with random sweeping and proximal step
- Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises
- Efficient Bayesian inversion for simultaneous estimation of geometry and spatial field using the Karhunen-Loève expansion
- Distributed quasi-monotone subgradient algorithm for nonsmooth convex optimization over directed graphs
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- The rate of convergence of Bregman proximal methods: local geometry versus regularity versus sharpness
- Subgradient ellipsoid method for nonsmooth convex problems
- Subgradient methods for saddle-point problems
- A randomized progressive hedging algorithm for stochastic variational inequality
- First-order methods for convex optimization
- A level-set method for convex optimization with a feasible solution path
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- Communication-computation tradeoff in distributed consensus optimization for MPC-based coordinated control under wireless communications
- Hypodifferentials of nonsmooth convex functions and their applications to nonsmooth convex optimization
- Learning in games with continuous action sets and unknown payoff functions
- Relatively smooth convex optimization by first-order methods, and applications
- String-averaging incremental stochastic subgradient algorithms
- Optimization methods for large-scale machine learning
- A new computational framework for log-concave density estimation
- On the computational efficiency of subgradient methods: a case study with Lagrangian bounds
- Incrementally updated gradient methods for constrained and regularized optimization
- Essentials of numerical nonsmooth optimization
- A sparsity preserving stochastic gradient methods for sparse regression
- Stochastic approximation based confidence regions for stochastic variational inequalities
- On the convergence of gradient-like flows with noisy gradient input
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- Tight ergodic sublinear convergence rate of the relaxed proximal point algorithm for monotone variational inequalities
- Approximate Newton Policy Gradient Algorithms
- Learning equilibrium in bilateral bargaining games
- Stochastic algorithms with geometric step decay converge linearly on sharp functions
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Accelerated dual-averaging primal–dual method for composite convex minimization
- A graph-based decomposition method for convex quadratic optimization with indicators
- Block stochastic gradient iteration for convex and nonconvex optimization
- OSGA: a fast subgradient algorithm with optimal complexity
- On efficient randomized algorithms for finding the PageRank vector
- A two-point heuristic to calculate the stepsize in subgradient method with application to a network design problem
- Optimization problems in statistical learning: duality and optimality conditions
- Non-Euclidean restricted memory level method for large-scale convex optimization
- Bregman methods for large-scale optimization with applications in imaging
- Learning variational autoencoders via MCMC speed measures
- A partially inexact bundle method for convex semi-infinite minmax problems
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach
- Accelerated stochastic algorithms for convex-concave saddle-point problems
- Gradient-free method for nonsmooth distributed optimization
- On the initialization for convex-concave min-max problems
- On the last iterate convergence of momentum methods
- Inertial game dynamics and applications to constrained optimization
- Universal gradient methods for convex optimization problems
- Primal–dual exterior point method for convex optimization
- A game-theory-based scheme to facilitate consensus latency minimization in sharding blockchain
- Stochastic mirror descent dynamics and their convergence in monotone variational inequalities
- New results on subgradient methods for strongly convex optimization problems with a unified analysis
- Two-layer neural network on infinite-dimensional data: global optimization guarantee in the mean-field regime
- A relax-and-cut framework for large-scale maximum weight connected subgraph problems
- Iteratively reweighted \(\ell _1\) algorithms with extrapolation
- Inexact dual averaging method for distributed multi-agent optimization
- An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
- Optimal anytime regret with two experts
- An indefinite proximal subgradient-based algorithm for nonsmooth composite optimization
- Structured sparsity: discrete and convex approaches
- An algebraic theory for primal and dual substructuring methods by constraints
- A primal-dual approach to inexact subgradient methods
- Primal convergence from dual subgradient methods for convex optimization
- lmls
- Asymptotic properties of dual averaging algorithm for constrained distributed stochastic optimization
- The approximate duality gap technique: a unified theory of first-order methods
- Asymptotic normality and optimality in nonsmooth stochastic approximation
- Distributed linear regression by averaging
MaRDI item Q116219