Gradient methods for minimizing composite functions (Q359630)
From MaRDI portal
Property / DOI: 10.1007/s10107-012-0629-5 / rank: Normal rank
Property / author: Yurii Nesterov / rank: Normal rank
Property / review text: Let \(E\) be a finite-dimensional linear space with dual \(E^*\). The author provides new extended gradient methods for solving optimization problems of the form \[ \phi(x)= f(x)+ \psi(x)\to\min,\quad\text{s.t. } x\in Q, \] i.e., optimization problems whose objective function \(\phi: E\to\mathbb{R}\) is the sum of a smooth (not necessarily convex) function \(f\) and a convex (not necessarily smooth) function \(\psi\). The set \(Q\subset E\) is assumed to be convex. It is first pointed out that, for general nonsmooth nonconvex functions, even deciding whether a descent direction exists at a given point is NP-hard. For objectives of the above composite form, however, this difficulty can be overcome using the so-called composite gradient mapping \(g_L: E\to E^*\), defined by \[ \begin{gathered} T_L(y):= \text{argmin}\Biggl\{f(y)+ \langle\nabla f(y),x-y\rangle+{L\over 2}\| x-y\|^2+ \psi(x)\mid x\in Q\Biggr\},\\ g_L(y):= L\cdot B(y- T_L(y)),\end{gathered} \] where \(B: E\to E^*\) is a fixed positive definite self-adjoint operator defining the norm \(\| h\|=\langle Bh,h\rangle^{1/2}\) and \(L\) is a positive constant. In other words, the objective defining the composite gradient mapping is that of the classical gradient mapping for smooth problems, augmented by the nonsmooth convex term. After a discussion of the properties of this mapping, several generalized gradient methods for the optimization problem are presented. It is shown that, in both the convex and the nonconvex case, these methods enjoy exactly the same complexity estimates as in the usual smooth situation \((\psi=0)\). At the end of the paper, the author gives examples of applications and computational results for the proposed methods. / rank: Normal rank
Property / reviewed by: Jörg Thierfelder / rank: Normal rank
Property / Mathematics Subject Classification ID: 90C30 / rank: Normal rank
Property / Mathematics Subject Classification ID: 90C25 / rank: Normal rank
Property / Mathematics Subject Classification ID: 90C32 / rank: Normal rank
Property / Mathematics Subject Classification ID: 65K05 / rank: Normal rank
Property / zbMATH DE Number: 6197840 / rank: Normal rank
Property / zbMATH Keywords: composite function / rank: Normal rank
Property / zbMATH Keywords: gradient method / rank: Normal rank
Property / zbMATH Keywords: composite gradient mapping / rank: Normal rank
Property / describes a project that uses: PDCO / rank: Normal rank
Property / MaRDI profile type: MaRDI publication profile / rank: Normal rank
Property / full work available at URL: https://doi.org/10.1007/s10107-012-0629-5 / rank: Normal rank
Property / OpenAlex ID: W2030161963 / rank: Normal rank
Property / cites work: Atomic Decomposition by Basis Pursuit / rank: Normal rank
Property / cites work: A generalized proximal point algorithm for certain non-convex minimization problems / rank: Normal rank
Property / cites work: Q4806224 / rank: Normal rank
Property / cites work: Introductory lectures on convex optimization. A basic course. / rank: Normal rank
Property / cites work: Smooth minimization of non-smooth functions / rank: Normal rank
Property / cites work: Rounding of convex sets and efficient gradient methods for linear programming problems / rank: Normal rank
Property / cites work: Accelerating the cubic regularization of Newton's method on convex problems / rank: Normal rank
Property / cites work: Q4324980 / rank: Normal rank
Property / cites work: Q5652137 / rank: Normal rank
Property / cites work: Linear Inversion of Band-Limited Reflection Seismograms / rank: Normal rank
Property / cites work: Q4864293 / rank: Normal rank
Property / cites work: Just relax: convex programming methods for identifying sparse signals in noise / rank: Normal rank
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Gradient methods for minimizing composite functions | scientific article | |
Statements
Gradient methods for minimizing composite functions (English)
12 August 2013
Let \(E\) be a finite-dimensional linear space with dual \(E^*\). The author provides new extended gradient methods for solving optimization problems of the form \[ \phi(x)= f(x)+ \psi(x)\to\min,\quad\text{s.t. } x\in Q, \] i.e., optimization problems whose objective function \(\phi: E\to\mathbb{R}\) is the sum of a smooth (not necessarily convex) function \(f\) and a convex (not necessarily smooth) function \(\psi\). The set \(Q\subset E\) is assumed to be convex. It is first pointed out that, for general nonsmooth nonconvex functions, even deciding whether a descent direction exists at a given point is NP-hard. For objectives of the above composite form, however, this difficulty can be overcome using the so-called composite gradient mapping \(g_L: E\to E^*\), defined by \[ \begin{gathered} T_L(y):= \text{argmin}\Biggl\{f(y)+ \langle\nabla f(y),x-y\rangle+{L\over 2}\| x-y\|^2+ \psi(x)\mid x\in Q\Biggr\},\\ g_L(y):= L\cdot B(y- T_L(y)),\end{gathered} \] where \(B: E\to E^*\) is a fixed positive definite self-adjoint operator defining the norm \(\| h\|=\langle Bh,h\rangle^{1/2}\) and \(L\) is a positive constant. In other words, the objective defining the composite gradient mapping is that of the classical gradient mapping for smooth problems, augmented by the nonsmooth convex term. After a discussion of the properties of this mapping, several generalized gradient methods for the optimization problem are presented. It is shown that, in both the convex and the nonconvex case, these methods enjoy exactly the same complexity estimates as in the usual smooth situation \((\psi=0)\). At the end of the paper, the author gives examples of applications and computational results for the proposed methods.
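The composite gradient mapping admits a closed form in the most common special case. The following is a minimal numerical sketch, assuming \(B\) is the identity, \(Q = E = \mathbb{R}^n\), and \(\psi(x) = \lambda\|x\|_1\), in which case \(T_L(y)\) reduces to soft-thresholding of an ordinary gradient step; the function names and the least-squares test problem are illustrative choices, not taken from the paper.

```python
import numpy as np

def T_L(y, grad_f, L, lam):
    """Minimize <grad_f(y), x - y> + (L/2)*||x - y||^2 + lam*||x||_1 over x.

    With psi = lam*||.||_1 and Q = R^n, the minimizer is the
    soft-thresholded forward gradient step."""
    z = y - grad_f(y) / L                                     # forward step on the smooth part
    return np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of (lam/L)*||.||_1

def g_L(y, grad_f, L, lam):
    """Composite gradient mapping g_L(y) = L*(y - T_L(y)) (here B = I);
    it vanishes exactly at stationary points of phi."""
    return L * (y - T_L(y, grad_f, L, lam))

# Illustrative test problem: f(x) = (1/2)*||Ax - b||^2 with l1 regularization.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad f (spectral norm squared)
lam = 0.1

x = np.zeros(5)
for _ in range(200):                    # plain (non-accelerated) composite gradient iteration
    x = T_L(x, grad_f, L, lam)
print("norm of composite gradient mapping:", np.linalg.norm(g_L(x, grad_f, L, lam)))
```

Here \(\|g_L(x)\|\) plays the role that the gradient norm plays in the smooth case, serving as the stationarity measure and stopping criterion; the accelerated methods in the paper replace the plain iteration \(x_{k+1} = T_L(x_k)\) with more elaborate updates at essentially the same per-iteration cost.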
composite function
gradient method
composite gradient mapping