Mirror descent and nonlinear projected subgradient methods for convex optimization.

From MaRDI portal
Publication: 1811622

DOI: 10.1016/S0167-6377(02)00231-6
zbMath: 1046.90057
MaRDI QID: Q1811622

Amir Beck, Marc Teboulle

Publication date: 17 June 2003

Published in: Operations Research Letters
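The record above is purely bibliographic, but for orientation: the Beck–Teboulle paper recasts the projected subgradient method with a non-Euclidean (Bregman) projection, the best-known instance being mirror descent on the probability simplex with the negative-entropy mirror map, where the update becomes a multiplicative (exponentiated-gradient) step. A minimal illustrative sketch — the objective, step-size constant, and function names are assumptions for illustration, not taken from the record:

```python
import numpy as np

def entropic_mirror_descent(grad, x0, steps, c=1.0):
    """Mirror descent on the probability simplex with the negative-entropy
    mirror map (exponentiated gradient): a multiplicative update followed
    by normalization, which plays the role of the Bregman projection."""
    x = x0.copy()
    for k in range(1, steps + 1):
        g = grad(x)                  # (sub)gradient at the current point
        t = c / np.sqrt(k)           # classic O(1/sqrt(k)) step-size rule
        x = x * np.exp(-t * g)       # step in the dual, map back multiplicatively
        x /= x.sum()                 # normalize = project onto the simplex
    return x

# Illustrative example: minimize the linear objective <cost, x> over the simplex;
# the iterates concentrate mass on the cheapest coordinate (index 1 here).
cost = np.array([3.0, 1.0, 2.0])
x0 = np.ones(3) / 3
x = entropic_mirror_descent(lambda x: cost, x0, steps=500)
```

For a linear objective the minimum over the simplex is attained at a vertex, so after a few hundred iterations essentially all mass sits on the coordinate with the smallest cost.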




Related Items

Efficient learning of discrete graphical models*
An inexact first-order method for constrained nonlinear optimization
Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals
PROBLEMS OF DIFFERENTIAL AND TOPOLOGICAL DIAGNOSTICS. PART 5. THE CASE OF TRAJECTORIAL MEASUREMENTS WITH ERROR
Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
PROBLEMS OF DIFFERENTIAL AND TOPOLOGICAL DIAGNOSTICS. PART 6. STATISTICAL SOLVING OF THE PROBLEM OF DIFFERENTIAL DIAGNOSTICS
Unifying mirror descent and dual averaging
Distributed mirror descent algorithm over unbalanced digraphs based on gradient weighting technique
Large-Scale Nonconvex Optimization: Randomization, Gap Estimation, and Numerical Resolution
Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
Block Policy Mirror Descent
Stochastic mirror descent method for linear ill-posed problems in Banach spaces
A fast adaptive algorithm for nonlinear inverse problems with convex penalty
Stochastic composition optimization of functions without Lipschitz continuous gradient
Faster randomized block sparse Kaczmarz by averaging
Smooth over-parameterized solvers for non-smooth structured optimization
No-regret algorithms in on-line learning, games and convex optimization
Mirror variational transport: a particle-based algorithm for distributional optimization on constrained domains
Conformal mirror descent with logarithmic divergences
Bregman-Golden ratio algorithms for variational inequalities
Stochastic incremental mirror descent algorithms with Nesterov smoothing
Learning Stationary Nash Equilibrium Policies in \(n\)-Player Stochastic Games with Independent Chains
The optimal dynamic regret for smoothed online convex optimization with squared \(l_2\) norm switching costs
Block mirror stochastic gradient method for stochastic optimization
Continuous time learning algorithms in optimization and game theory
Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence
First-order methods for convex optimization
Data-Driven Mirror Descent with Input-Convex Neural Networks
Local convexity of the TAP free energy and AMP convergence for \(\mathbb{Z}_2\)-synchronization
Entropic Trust Region for Densest Crystallographic Symmetry Group Packings
Optimal Scheduling of Entropy Regularizer for Continuous-Time Linear-Quadratic Reinforcement Learning
A non-linear conjugate gradient in dual space for \(L_p\)-norm regularized non-linear least squares with application in data assimilation
On the Adaptivity of Stochastic Gradient-Based Optimization
Optimization-Based Calibration of Simulation Input Models
Analysis of Online Composite Mirror Descent Algorithm
Interior-Point-Based Online Stochastic Bin Packing
PROBLEMS OF DIFFERENTIAL AND TOPOLOGICAL DIAGNOSTICS. PART 4. THE CASE OF EXACT TRAJECTORIAL MEASUREMENTS
An introduction to continuous optimization for imaging
Scalable estimation strategies based on stochastic approximations: classical results and new insights
PROBLEMS OF DIFFERENTIAL AND TOPOLOGICAL DIAGNOSTICS. PART 1. MOTION EQUATIONS AND CLASSIFICATION OF MALFUNCTIONS
An entropic Landweber method for linear ill-posed problems
On the Convergence Time of a Natural Dynamics for Linear Programming
Essentials of numerical nonsmooth optimization
Analogues of Switching Subgradient Schemes for Relatively Lipschitz-Continuous Convex Programming Problems
Adaptive Mirror Descent Algorithms for Convex and Strongly Convex Optimization Problems with Functional Constraints
Revisiting Deep Structured Models for Pixel-Level Labeling with Gradient-Based Inference
Modern regularization methods for inverse problems
Hessian Barrier Algorithms for Linearly Constrained Optimization Problems
Generalized Conditional Gradient for Sparse Estimation
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
On the generalization of ECP and OA methods to nonsmooth convex MINLP problems
Privacy Aware Learning
Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction
Accelerated Iterative Regularization via Dual Diagonal Descent
On the Convergence of Mirror Descent beyond Stochastic Convex Programming
Stochastic Gradient Markov Chain Monte Carlo
Dual Space Preconditioning for Gradient Descent
On Modification of an Adaptive Stochastic Mirror Descent Algorithm for Convex Optimization Problems with Functional Constraints
Continuous-domain assignment flows
Adaptive online distributed optimization in dynamic environments
Penalty and Augmented Lagrangian Methods for Constrained DC Programming
Asymptotically Optimal Sequential Design for Rank Aggregation
Bregman Finito/MISO for Nonconvex Regularized Finite Sum Minimization without Lipschitz Gradient Continuity
Implicit regularization with strongly convex bias: Stability and acceleration
Comparing different nonsmooth minimization methods and software
Bregman proximal gradient algorithms for deep matrix factorization
On risk concentration for convex combinations of linear estimators
On iteration complexity of a first-order primal-dual method for nonlinear convex cone programming
The Cyclic Block Conditional Gradient Method for Convex Optimization Problems
Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
Projection algorithms for nonconvex minimization with application to sparse principal component analysis
A dual method for minimizing a nonsmooth objective over one smooth inequality constraint
On the ergodic convergence rates of a first-order primal-dual algorithm
New results on subgradient methods for strongly convex optimization problems with a unified analysis
Optimal distributed stochastic mirror descent for strongly convex optimization
Algorithms of inertial mirror descent in convex problems of stochastic optimization
A derivative-free comirror algorithm for convex optimization
Optimal complexity and certification of Bregman first-order methods
Sparse optimization on measures with over-parameterized gradient descent
A weighted mirror descent algorithm for nonsmooth convex optimization problem
Techniques for gradient-based bilevel optimization with non-smooth lower level problems
Global convergence of model function based Bregman proximal minimization algorithms
Generalized mirror descents in congestion games
A simplified view of first order methods for optimization
Block coordinate proximal gradient methods with variable Bregman functions for nonsmooth separable optimization
Hessian informed mirror descent
Training effective node classifiers for cascade classification
A fast dual proximal gradient algorithm for convex minimization and applications
A penalty algorithm for solving convex separable knapsack problems
Stochastic mirror descent method for distributed multi-agent optimization
Approximation accuracy, gradient methods, and error bound for structured convex optimization
Laplacian-optimized diffusion for semi-supervised learning
Multi-view clustering via multi-manifold regularized non-negative matrix factorization
Generalized mirror descents with non-convex potential functions in atomic congestion games: continuous time and discrete time
Accelerated training of max-margin Markov networks with kernels
A multiplicative weight updates algorithm for packing and covering semi-infinite linear programs
The CoMirror algorithm for solving nonsmooth constrained convex problems
An optimal subgradient algorithm with subspace search for costly convex optimization problems
A simple convergence analysis of Bregman proximal gradient algorithm
Convergence of the exponentiated gradient method with Armijo line search
Augmented Lagrangian method with alternating constraints for nonlinear optimization problems
Feature-aware regularization for sparse online learning
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
Zeroth-order feedback optimization for cooperative multi-agent systems
Perturbed Fenchel duality and first-order methods
Large-scale distributed sparse class-imbalance learning
Recursive aggregation of estimators by the mirror descent algorithm with averaging
Iterative regularization via dual diagonal descent
Scale-free online learning
Mirror descent algorithms for minimizing interacting free energy
Event-triggered distributed online convex optimization with delayed bandit feedback
An efficient approach to solve the large-scale semidefinite programming problems
An incremental mirror descent subgradient algorithm with random sweeping and proximal step
Re-examination of Bregman functions and new properties of their divergences
Distributed constrained optimization via continuous-time mirror design
Learning in games with continuous action sets and unknown payoff functions
An alternating extragradient method with non Euclidean projections for saddle point problems
Near-optimal discrete optimization for experimental design: a regret minimization approach
A gradient descent perspective on Sinkhorn
A generalized online mirror descent with applications to classification and regression
Level-set methods for convex optimization
Diagonal bundle method for nonsmooth sparse optimization
Saddle point mirror descent algorithm for the robust PageRank problem
On the convergence time of a natural dynamics for linear programming
A version of the mirror descent method to solve variational inequalities
Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\)
Bundle methods for sum-functions with ``easy'' components: applications to multicommodity network design
A continuous-time approach to online optimization
Nonmonotone projected gradient methods based on barrier and Euclidean distances
Projected subgradient minimization versus superiorization
Inertial alternating generalized forward-backward splitting for image colorization
Generalized stochastic Frank-Wolfe algorithm with stochastic ``substitute'' gradient for structured convex optimization
Convergence of online mirror descent
Natural gradient for combined loss using wavelets
Primal-dual subgradient methods for convex problems
A Subgradient Method Based on Gradient Sampling for Solving Convex Optimization Problems
Mass-spring-damper networks for distributed optimization in non-Euclidean spaces
Multi-manifold matrix decomposition for data co-clustering
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
A Laplacian approach to \(\ell_1\)-norm minimization
Fastest rates for stochastic mirror descent methods
Projected subgradient methods with non-Euclidean distances for non-differentiable convex minimization and variational inequalities
Infinite-dimensional gradient-based descent for alpha-divergence minimisation
PROBLEMS OF DIFFERENTIAL AND TOPOLOGICAL DIAGNOSTICS. PART II. PROBLEM OF DIFFERENTIAL DIAGNOSTICS
PROBLEMS OF DIFFERENTIAL AND TOPOLOGICAL DIAGNOSTICS. PART 3. THE CHECKING PROBLEM
Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
A family of subgradient-based methods for convex optimization problems in a unifying framework
Optimal subgradient methods: computational properties for large-scale linear inverse problems
Learning in Games via Reinforcement and Regularization
Inverse reinforcement learning in contextual MDPs
Subgradient methods for saddle-point problems
Subgradient and Bundle Methods for Nonsmooth Optimization
A telescopic Bregmanian proximal gradient method without the global Lipschitz continuity assumption
On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
Acceptable set topic modeling
Interior projection-like methods for monotone variational inequalities
Curiosities and counterexamples in smooth convex optimization
Analysis of singular value thresholding algorithm for matrix completion
A distributed Bregman forward-backward algorithm for a class of Nash equilibrium problems
Network manipulation algorithm based on inexact alternating minimization
Quasi-monotone subgradient methods for nonsmooth convex minimization
Limited memory discrete gradient bundle method for nonsmooth derivative-free optimization
On the efficiency of a randomized mirror descent algorithm in online optimization problems
Solving variational inequalities with monotone operators on domains given by linear minimization oracles



Cites Work