Gradient methods for minimizing composite functions (Q359630)

From MaRDI portal
Property / author
 
Property / author: Yu. Nesterov / rank
 
Normal rank
Property / review text
 
Property / review text: Let \(E\) be a finite-dimensional linear space with dual \(E^*\). The author provides new extended gradient methods for solving optimization problems of the form \[ \phi(x)= f(x)+ \psi(x)\to\min,\quad\text{s.t. } x\in Q, \] i.e. optimization problems whose objective function \(\phi: E\to\mathbb{R}\) is the sum of a smooth (not necessarily convex) function \(f\) and a convex (not necessarily smooth) function \(\psi\); the set \(Q\subset E\) is assumed to be convex. First, it is pointed out that for general nonsmooth, nonconvex functions, even deciding whether a descent direction exists at a given point is NP-hard. However, for the special composite form of the objective above, this difficulty can be overcome by means of the so-called composite gradient mapping \(g_L: E\to E^*\), defined by \[ \begin{gathered} T_L(y):= \text{argmin}\Biggl\{f(y)+ \langle\nabla f(y),x-y\rangle+{L\over 2}\| x-y\|^2+ \psi(x)\mid x\in Q\Biggr\},\\ g_L(y):= L\cdot B(y- T_L(y)),\end{gathered} \] where \(B: E\to E^*\) is a fixed positive definite self-adjoint operator defining the norm \(\| h\|=\langle Bh,h\rangle^{1/2}\), and \(L\) is a positive constant. In other words, the objective minimized in the composite gradient mapping is the objective of the classical gradient mapping for smooth problems augmented by the nonsmooth convex term \(\psi\). After a discussion of the properties of this mapping, several generalized gradient methods for the composite problem are presented. It is shown that, in both the convex and the nonconvex case, these methods attain exactly the same complexity bounds as in the usual smooth situation \((\psi=0)\). At the end of the paper, the author gives examples of applications and computational results for the proposed methods. / rank
 
Normal rank
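To make the construction concrete, here is a minimal numerical sketch (an illustration under assumptions, not code from the paper) of the composite gradient mapping in the special case \(Q=E=\mathbb{R}^n\), \(B=I\), and \(\psi(x)=\lambda\|x\|_1\). In this case \(T_L(y)\) reduces to soft-thresholding of an ordinary gradient step, and the basic gradient iteration is \(x_{k+1}=T_L(x_k)\). The quadratic choice of \(f\) and all parameter values below are hypothetical.

```python
import numpy as np

# Sketch under assumptions: Q = E = R^n, B = I, psi(x) = lam * ||x||_1.
# Then T_L(y) is soft-thresholding of a plain gradient step, and
# g_L(y) = L * (y - T_L(y)) is the composite gradient mapping.

def T_L(y, grad_f, L, lam):
    """argmin_x f(y) + <grad f(y), x-y> + (L/2)||x-y||^2 + lam*||x||_1."""
    step = y - grad_f(y) / L                 # gradient step on the smooth part
    return np.sign(step) * np.maximum(np.abs(step) - lam / L, 0.0)

def g_L(y, grad_f, L, lam):
    """Composite gradient mapping; it vanishes exactly at stationary points."""
    return L * (y - T_L(y, grad_f, L, lam))

# Toy instance: f(x) = 0.5*||A x - b||^2 (smooth, convex), psi = lam*||.||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)               # Lipschitz constant of grad f
lam = 0.1
x = np.zeros(5)
for _ in range(200):                          # basic method: x_{k+1} = T_L(x_k)
    x = T_L(x, grad_f, L, lam)
print("approximate minimizer:", x)
print("||g_L(x)|| =", np.linalg.norm(g_L(x, grad_f, L, lam)))
```

Note that for \(\psi=0\) and \(Q=E\) the mapping collapses to \(g_L(y)=\nabla f(y)\), which is consistent with the complexity results carrying over unchanged from the smooth case.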
Property / reviewed by
 
Property / reviewed by: Jörg Thierfelder / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 90C30 / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 90C25 / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 90C32 / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 65K05 / rank
 
Normal rank
Property / zbMATH DE Number
 
Property / zbMATH DE Number: 6197840 / rank
 
Normal rank
Property / zbMATH Keywords
 
composite function
Property / zbMATH Keywords: composite function / rank
 
Normal rank
Property / zbMATH Keywords
 
gradient method
Property / zbMATH Keywords: gradient method / rank
 
Normal rank
Property / zbMATH Keywords
 
composite gradient mapping
Property / zbMATH Keywords: composite gradient mapping / rank
 
Normal rank


Language: English
Label: Gradient methods for minimizing composite functions
Description: scientific article

    Statements

    Gradient methods for minimizing composite functions (English)
    12 August 2013
