Fast first-order methods for composite convex optimization with backtracking
Publication: Q404292
DOI: 10.1007/S10208-014-9189-9
zbMATH Open: 1304.90161
OpenAlex: W1997616626
MaRDI QID: Q404292
Authors: Katya Scheinberg, Donald Goldfarb, Xi Bai
Publication date: 4 September 2014
Published in: Foundations of Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10208-014-9189-9
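For context, the publication concerns accelerated (FISTA-type) proximal-gradient methods that estimate the Lipschitz constant by backtracking rather than assuming it is known. The following is a minimal illustrative sketch of such a scheme, applied here to a lasso objective; the objective, step rule, and parameter names are chosen for illustration and are not the authors' exact method.

```python
import numpy as np

def fista_backtracking(A, b, lam, x0, L0=1.0, eta=2.0, max_iter=200):
    """Illustrative FISTA with backtracking for
    min_x 0.5*||A x - b||^2 + lam*||x||_1 (lasso chosen as an example)."""
    def f(x):            # smooth part of the composite objective
        r = A @ x - b
        return 0.5 * (r @ r)

    def grad_f(x):
        return A.T @ (A @ x - b)

    def prox_g(x, t):    # soft-thresholding: prox of t*lam*||.||_1
        return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(max_iter):
        g_y, f_y = grad_f(y), f(y)
        while True:      # backtracking: grow L until the quadratic model majorizes f
            x_new = prox_g(y - g_y / L, 1.0 / L)
            d = x_new - y
            if f(x_new) <= f_y + g_y @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

With `A` the identity, the iteration reduces to soft-thresholding `b`, which makes the sketch easy to sanity-check by hand.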
Recommendations
- Fast first-order methods for minimizing convex composite functions
- Exact worst-case performance of first-order methods for composite convex optimization
- Backtracking strategies for accelerated descent methods with smooth composite objectives
- Accelerated first-order primal-dual proximal methods for linearly constrained composite convex programming
- Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
- An adaptive accelerated first-order method for convex optimization
- A first-order splitting method for solving a large-scale composite convex optimization problem
- First-order methods for convex optimization
- Efficient first-order methods for convex minimization: a constructive approach
- Optimized first-order methods for smooth convex minimization
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Model Selection and Estimation in Regression with Grouped Variables
- Smooth minimization of non-smooth functions
- Introductory lectures on convex optimization. A basic course.
- Gradient methods for minimizing composite functions
- Fast alternating linearization methods for minimizing the sum of two convex functions
- Compressive sampling
Cited In (30)
- A scaled and adaptive FISTA algorithm for signal-dependent sparse image super-resolution problems
- An optimal high-order tensor method for convex optimization
- Parameter-free accelerated gradient descent for nonconvex minimization
- Scaled, inexact, and adaptive generalized FISTA for strongly convex optimization
- Accelerated linearized Bregman method
- Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates
- Additive Schwarz methods for convex optimization with backtracking
- Accelerated methods with fastly vanishing subgradients for structured non-smooth minimization
- Network classification with applications to brain connectomics
- On the interplay between acceleration and identification for the proximal gradient algorithm
- Accelerated Bregman operator splitting with backtracking
- Backtracking strategies for accelerated descent methods with smooth composite objectives
- Fast first-order methods for minimizing convex composite functions
- Second order semi-smooth proximal Newton methods in Hilbert spaces
- Globalized inexact proximal Newton-type methods for nonconvex composite functions
- A variable metric and Nesterov extrapolated proximal DCA with backtracking for a composite DC program
- Inexact proximal Newton methods for self-concordant functions
- Parameter-free FISTA by adaptive restart and backtracking
- Complementary composite minimization, small gradients in general norms, and applications
- An abstract convergence framework with application to inertial inexact forward-backward methods
- Fast alternating linearization methods for minimizing the sum of two convex functions
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- An accelerated forward-backward algorithm with a new linesearch for convex minimization problems and its applications
- Proximal extrapolated gradient methods for variational inequalities
- An improved parameterized fast iterative shrinkage-thresholding algorithm with adaptive step size and its applications
- Mirror Prox algorithm for multi-term composite minimization and semi-separable problems
- Some modified fast iterative shrinkage thresholding algorithms with a new adaptive non-monotone stepsize strategy for nonsmooth and convex minimization problems
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees
- Convergence rate of inertial proximal algorithms with general extrapolation and proximal coefficients
- Exact gradient methods with memory