Interior Gradient and Proximal Methods for Convex and Conic Optimization
Publication: 5470216
DOI: 10.1137/S1052623403427823
zbMath: 1113.90118
OpenAlex: W1986891697
MaRDI QID: Q5470216
Alfred Auslender, Marc Teboulle
Publication date: 30 May 2006
Published in: SIAM Journal on Optimization
Full work available at URL: https://doi.org/10.1137/s1052623403427823
Keywords: convex optimization; conic optimization; proximal distances; convergence and efficiency; interior gradient/subgradient algorithms
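The keywords point to interior gradient/subgradient schemes built from proximal (e.g. Bregman-type) distances. As a rough illustration of this class of methods, and not the specific algorithm of the paper, the sketch below implements the entropic interior gradient step on the nonnegative orthant; the objective, step size, and data are hypothetical.

```python
# Minimal sketch (assumed, illustrative): an interior gradient step with the
# entropy kernel h(x) = sum_i (x_i log x_i - x_i). The Bregman update
#   x_{k+1} = argmin_x <grad f(x_k), x> + D_h(x, x_k)/step
# reduces to the multiplicative rule x <- x * exp(-step * grad f(x)),
# so iterates stay strictly inside the positive orthant without projection.
import numpy as np

def entropic_interior_gradient(grad, x0, step=0.1, iters=500):
    """Entropic (exponentiated-gradient) interior gradient method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x * np.exp(-step * grad(x))  # strictly positive throughout
    return x

# Example with hypothetical data: minimize 0.5 * ||x - c||^2 over x >= 0.
c = np.array([1.0, -2.0, 0.5])
x_star = entropic_interior_gradient(lambda x: x - c, x0=np.ones(3))
# Coordinates with c_i < 0 are driven toward 0 but remain positive.
```

The entropy kernel acts as a built-in barrier: the multiplicative update keeps every iterate strictly feasible, which is what distinguishes interior gradient methods of this kind from projected gradient schemes.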
Related Items
First-order methods for convex optimization
Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Nonconvex Optimization
WARPd: A Linearly Convergent First-Order Primal-Dual Algorithm for Inverse Problems with Approximate Sharpness Conditions
The interior proximal extragradient method for solving equilibrium problems
An accelerated coordinate gradient descent algorithm for non-separable composite optimization
OSGA: a fast subgradient algorithm with optimal complexity
Accelerated gradient sliding for structured convex optimization
Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
Inexact multi-objective local search proximal algorithms: application to group dynamic and distributive justice problems
A Preconditioner for A Primal-Dual Newton Conjugate Gradient Method for Compressed Sensing Problems
On Convex Finite-Dimensional Variational Methods in Imaging Sciences and Hamilton-Jacobi Equations
Gradient sliding for composite optimization
An accelerated first-order method for solving SOS relaxations of unconstrained polynomial optimization problems
New results on subgradient methods for strongly convex optimization problems with a unified analysis
An inexact proximal method for quasiconvex minimization
Faster Lagrangian-Based Methods in Convex Optimization
Optimal complexity and certification of Bregman first-order methods
A Strictly Contractive Peaceman-Rachford Splitting Method with Logarithmic-Quadratic Proximal Regularization for Convex Programming
A simplified view of first order methods for optimization
A smoothing stochastic gradient method for composite optimization
First Order Methods Beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems
An inexact algorithm with proximal distances for variational inequalities
Approximation accuracy, gradient methods, and error bound for structured convex optimization
Bregman Proximal Point Algorithm Revisited: A New Inexact Version and Its Inertial Variant
Algorithms for stochastic optimization with function or expectation constraints
An alternating direction method for finding Dantzig selectors
Accelerated schemes for a class of variational inequalities
An optimal subgradient algorithm with subspace search for costly convex optimization problems
A new convergence analysis and perturbation resilience of some accelerated proximal forward–backward algorithms with errors
Pareto solutions as limits of collective traps: an inexact multiobjective proximal point algorithm
An inexact scalarization proximal point method for multiobjective quasiconvex minimization
An extrapolated iteratively reweighted \(\ell_1\) method with complexity analysis
Iteratively reweighted \(\ell_1\) algorithms with extrapolation
A simple convergence analysis of Bregman proximal gradient algorithm
Accelerated randomized mirror descent algorithms for composite non-strongly convex optimization
Convergence of the exponentiated gradient method with Armijo line search
Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
An inexact proximal method with proximal distances for quasimonotone equilibrium problems
A Bregman stochastic method for nonconvex nonsmooth problem beyond global Lipschitz gradient continuity
Optimal subgradient algorithms for large-scale convex optimization in simple domains
The multiproximal linearization method for convex composite problems
Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
Convergence rates of gradient methods for convex optimization in the space of measures
No-regret algorithms in on-line learning, games and convex optimization
An extension of proximal methods for quasiconvex minimization on the nonnegative orthant
Inexact version of Bregman proximal gradient algorithm
An LQP-based two-step method for structured variational inequalities
An optimal method for stochastic composite optimization
An interior proximal method in vector optimization
Iteration-complexity of first-order penalty methods for convex programming
A very simple SQCQP method for a class of smooth convex constrained minimization problems with nice convergence results
Accelerated Uzawa methods for convex optimization
Convergence rate of a proximal multiplier algorithm for separable convex minimization
Bregman three-operator splitting methods
A self-adaptive descent LQP alternating direction method for the structured variational inequalities
On Decomposition Models in Imaging Sciences and Multi-time Hamilton-Jacobi Partial Differential Equations
A cyclic block coordinate descent method with generalized gradient projections
Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
An Average Curvature Accelerated Composite Gradient Method for Nonconvex Smooth Composite Optimization Problems
Further study on the convergence rate of alternating direction method of multipliers with logarithmic-quadratic proximal regularization
Legendre transform and applications to finite and infinite optimization
Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\)
Nonmonotone projected gradient methods based on barrier and Euclidean distances
Templates for convex cone problems with applications to sparse signal recovery
Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
A proximal multiplier method for separable convex minimization
An optimal randomized incremental gradient method
Random Gradient Extrapolation for Distributed and Stochastic Optimization
Interior proximal methods and central paths for convex second-order cone programming
Quartic first-order methods for low-rank minimization
Generalized Proximal Distances for Bilevel Equilibrium Problems
Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
Variable Metric Inexact Line-Search-Based Methods for Nonsmooth Optimization
Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
Comparative study of RPSALG algorithm for convex semi-infinite programming
Phase retrieval from coded diffraction patterns
Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
Projected subgradient methods with non-Euclidean distances for non-differentiable convex minimization and variational inequalities
Safe feature elimination for non-negativity constrained convex optimization
Interior proximal bundle algorithm with variable metric for nonsmooth convex symmetric cone programming
The modified second APG method for DC optimization problems
Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
An interior proximal linearized method for DC programming based on Bregman distance or second-order homogeneous kernels
Coordinate descent with arbitrary sampling I: algorithms and complexity
A family of subgradient-based methods for convex optimization problems in a unifying framework
Hessian Barrier Algorithms for Linearly Constrained Optimization Problems
Bregman primal-dual first-order method and application to sparse semidefinite programming
Subgradient methods for saddle-point problems
On the convergence of the entropy-exponential penalty trajectories and generalized proximal point methods in semidefinite optimization
On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
An Accelerated Linearized Alternating Direction Method of Multipliers
Two-step inertial Bregman alternating minimization algorithm for nonconvex and nonsmooth problems
The Supporting Halfspace-Quadratic Programming Strategy for the Dual of the Best Approximation Problem
Numerical solution of an inverse random source problem for the time fractional diffusion equation via PhaseLift
An interior projected-like subgradient method for mixed variational inequalities
On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators
Primal–dual first-order methods for a class of cone programming
Adaptive restart for accelerated gradient schemes
Accelerating variance-reduced stochastic gradient methods