Gradient methods for minimizing composite functions
From MaRDI portal
Publication:359630
DOI: 10.1007/S10107-012-0629-5 · zbMATH Open: 1287.90067 · OpenAlex: W2030161963 · MaRDI QID: Q359630
Authors: Yuri Nesterov
Publication date: 12 August 2013
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-012-0629-5
Classification (MSC):
Numerical mathematical programming methods (65K05) · Convex programming (90C25) · Fractional programming (90C32) · Nonlinear programming (90C30)
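The paper concerns composite objectives of the form f(x) = g(x) + h(x), with g smooth and h simple but possibly nonsmooth. As an illustration only (not the accelerated scheme analyzed in the paper), a minimal proximal-gradient sketch for the l1-regularized least-squares case, with all function names, step sizes, and parameters chosen here for exposition:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=2000):
    # Minimize (1/2)||Ax - b||^2 + lam * ||x||_1 by the basic
    # gradient step on the smooth part followed by the prox of the
    # nonsmooth part. 'step' should be at most 1 / ||A^T A||.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)              # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

A usual step-size choice is `1 / np.linalg.norm(A, 2)**2`, the reciprocal of the Lipschitz constant of the smooth gradient.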
Cites Work
- Atomic Decomposition by Basis Pursuit
- Smooth minimization of non-smooth functions
- Introductory lectures on convex optimization. A basic course.
- Just relax: convex programming methods for identifying sparse signals in noise
- A generalized proximal point algorithm for certain non-convex minimization problems
- Rounding of convex sets and efficient gradient methods for linear programming problems
- Linear Inversion of Band-Limited Reflection Seismograms
- Accelerating the cubic regularization of Newton's method on convex problems
Cited In (only showing first 100 items)
- A family of subgradient-based methods for convex optimization problems in a unifying framework
- Complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs
- Proximal Newton-type methods for minimizing composite functions
- Optimal subgradient methods: computational properties for large-scale linear inverse problems
- Block-wise ADMM with a relaxation factor for multiple-block convex programming
- PPA-like contraction methods for convex optimization: a framework using variational inequality approach
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Parallel coordinate descent methods for big data optimization
- Accelerated directional search with non-Euclidean prox-structure
- Block-simultaneous direction method of multipliers: a proximal primal-dual splitting algorithm for nonconvex problems with multiple constraints
- Inexact coordinate descent: complexity and preconditioning
- Multi-block Bregman proximal alternating linearized minimization and its application to orthogonal nonnegative matrix factorization
- Analysis and design of optimization algorithms via integral quadratic constraints
- Linear coupling: an ultimate unification of gradient and mirror descent
- Conditional gradient algorithms for norm-regularized smooth convex optimization
- Hierarchical sparse modeling: a choice of two group Lasso formulations
- First-order methods for convex optimization
- General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
- Accelerated gradient boosting
- Distributed proximal-gradient method for convex optimization with inequality constraints
- Exact worst-case performance of first-order methods for composite convex optimization
- Accelerated first-order primal-dual proximal methods for linearly constrained composite convex programming
- Backtracking strategies for accelerated descent methods with smooth composite objectives
- Matrix completion via max-norm constrained optimization
- OSGA: a fast subgradient algorithm with optimal complexity
- Fast first-order methods for minimizing convex composite functions
- Parallel random coordinate descent method for composite minimization: convergence analysis and error bounds
- Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization
- A Trust-region Method for Nonsmooth Nonconvex Optimization
- Universal gradient methods for convex optimization problems
- A dual method for minimizing a nonsmooth objective over one smooth inequality constraint
- Gradient sliding for composite optimization
- Optimized first-order methods for smooth convex minimization
- New results on subgradient methods for strongly convex optimization problems with a unified analysis
- Efficient first-order methods for convex minimization: a constructive approach
- Subsampled nonmonotone spectral gradient methods
- Stochastic intermediate gradient method for convex problems with stochastic inexact oracle
- Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
- On the complexity analysis of randomized block-coordinate descent methods
- Performance of first- and second-order methods for \(\ell_1\)-regularized least squares problems
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Universal method for stochastic composite optimization problems
- An inertial forward-backward algorithm for monotone inclusions
- Fast and scalable Lasso via stochastic Frank-Wolfe methods with a convergence guarantee
- A forward-backward splitting method for monotone inclusions without cocoercivity
- On full Jacobian decomposition of the augmented Lagrangian method for separable convex programming
- A strictly contractive Peaceman-Rachford splitting method with logarithmic-quadratic proximal regularization for convex programming
- Optimal subgradient algorithms for large-scale convex optimization in simple domains
- Solving structured nonsmooth convex optimization with complexity \(\mathcal {O}(\varepsilon ^{-1/2})\)
- Best subset selection via a modern optimization lens
- Improved algorithms for convex minimization in relative scale
- iPiasco: inertial proximal algorithm for strongly convex optimization
- Proximal Methods for Sparse Optimal Scoring and Discriminant Analysis
- A Barzilai-Borwein type method for minimizing composite functions
- Accelerated iterative hard thresholding algorithm for \(l_0\) regularized regression problem
- Convergence analysis of the generalized alternating direction method of multipliers with logarithmic-quadratic proximal regularization
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- Stochastic primal-dual coordinate method for regularized empirical risk minimization
- Primal-dual subgradient methods for convex problems
- An introduction to continuous optimization for imaging
- Metric selection in fast dual forward-backward splitting
- Mirror Prox algorithm for multi-term composite minimization and semi-separable problems
- Composite self-concordant minimization
- A smoothing stochastic gradient method for composite optimization
- An optimal subgradient algorithm with subspace search for costly convex optimization problems
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- A proximal partially parallel splitting method for separable convex programs
- Fast first-order methods for composite convex optimization with backtracking
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Block-wise alternating direction method of multipliers for multiple-block convex programming and beyond
- On the complexity of parallel coordinate descent
- Proximal-proximal-gradient method
- Scattered data interpolation with nonnegative preservation using bivariate splines and its application
- Convergence analysis of positive-indefinite proximal ADMM with a Glowinski's relaxation factor
- Alternating direction method of multipliers with variable metric indefinite proximal terms for convex optimization
- A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions
- Accelerated proximal algorithms with a correction term for monotone inclusions
- Sorted concave penalized regression
- Inexact successive quadratic approximation for regularized optimization
- Linesearch Newton-CG methods for convex optimization with noise
- Generalized conjugate gradient methods for \(\ell_1\) regularized convex quadratic programming with finite convergence
- A primal-dual flow for affine constrained convex optimization
- A proximal strictly contractive Peaceman-Rachford splitting method for convex programming with applications to imaging
- On convergence rates of linearized proximal algorithms for convex composite optimization with applications
- On the generation of sampling schemes for magnetic resonance imaging
- Data-driven nonsmooth optimization
- The cyclic block conditional gradient method for convex optimization problems
- A multilevel proximal gradient algorithm for a class of composite optimization problems
- Relatively smooth convex optimization by first-order methods, and applications
- Generalized affine scaling algorithms for linear programming problems
- DC formulations and algorithms for sparse optimization problems
- Accelerating the DC algorithm for smooth functions
- An accelerated randomized proximal coordinate gradient method and its application to regularized empirical risk minimization
- Self adaptive inertial extragradient algorithms for solving bilevel pseudomonotone variational inequality problems
- Activity identification and local linear convergence of forward-backward-type methods
- Another look at the fast iterative shrinkage/thresholding algorithm (FISTA)
- Catalyst acceleration for first-order convex optimization: from theory to practice
This page was built for publication: Gradient methods for minimizing composite functions