An adaptive accelerated first-order method for convex optimization
From MaRDI portal
Publication: 276852
DOI: 10.1007/s10589-015-9802-0 · zbMath: 1344.90049 · OpenAlex: W2236131147 · MaRDI QID: Q276852
Renato D. C. Monteiro, Camilo Ortiz, Benar Fux Svaiter
Publication date: 4 May 2016
Published in: Computational Optimization and Applications
Full work available at URL: https://doi.org/10.1007/s10589-015-9802-0
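The title refers to an adaptive accelerated first-order method. As a rough illustration of this general class of methods (a textbook FISTA-style sketch with backtracking estimation of the Lipschitz constant, not the authors' algorithm from the paper), consider:

```python
import numpy as np

def accelerated_gradient(grad, f, x0, L0=1.0, iters=500):
    """Generic accelerated gradient method with adaptive (backtracking)
    estimation of the Lipschitz constant L.  This is a standard
    FISTA-style sketch, not the algorithm proposed in the paper."""
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(iters):
        g = grad(y)
        # Backtracking: increase L until the usual descent condition holds.
        while True:
            x_new = y - g / L
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= 2.0
        # Nesterov momentum update.
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Example: least squares, min_x 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad = lambda x: A.T @ (A @ x - b)
x_star = accelerated_gradient(grad, f, np.zeros(10))
```

The backtracking loop is what makes the stepsize "adaptive" in this sketch: no Lipschitz constant needs to be supplied in advance, at the cost of extra function evaluations per iteration.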
Related Items
- Iteration Complexity of an Inner Accelerated Inexact Proximal Augmented Lagrangian Method Based on the Classical Lagrangian Function
- Projection-free accelerated method for convex optimization
- Accelerated inexact composite gradient methods for nonconvex spectral optimization problems
- Adaptive restart of the optimized gradient method for convex optimization
- An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
- An adaptive superfast inexact proximal augmented Lagrangian method for smooth nonconvex composite optimization problems
- On inexact relative-error hybrid proximal extragradient, forward-backward and Tseng's modified forward-backward methods with inertial effects
- An accelerated inexact dampened augmented Lagrangian method for linearly-constrained nonconvex composite optimization problems
- An inexact Spingarn's partial inverse method with applications to operator splitting and composite optimization
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
- On the convergence rate of the scaled proximal decomposition on the graph of a maximal monotone operator (SPDG) algorithm
- Local and global convergence of a general inertial proximal splitting scheme for minimizing composite functions
- Algorithm 996
- Iteration complexity of an inexact Douglas-Rachford method and of a Douglas-Rachford-Tseng's F-B four-operator splitting method for solving monotone inclusions
- An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
Cites Work
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- First-order methods of smooth convex optimization with inexact oracle
- Primal-dual first-order methods with \(\mathcal{O}(1/\varepsilon)\) iteration-complexity for cone programming
- Interior-point gradient method for large-scale totally nonnegative least squares problems
- A boundary point method to solve semidefinite programs
- Dual extrapolation and its applications to solving variational inequalities and related problems
- A computational study of a gradient-based log-barrier algorithm for a class of large-scale SDPs
- Introductory lectures on convex optimization. A basic course
- Fine tuning Nesterov's steepest descent algorithm for differentiable convex programming
- An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and Its Implications to Second-Order Methods
- A Newton-CG Augmented Lagrangian Method for Semidefinite Programming
- An Optimal Algorithm for Constrained Differentiable Convex Optimization
- Benchmarking optimization software with performance profiles