Fast alternating linearization methods for minimizing the sum of two convex functions (Q378095): Difference between revisions

From MaRDI portal
Property / author
 
Property / author: Shi-Qian Ma / rank
 
Normal rank
Property / review text
 
The authors consider the convex optimization problem (1) \(\min_{x \in \mathbb R^n} F(x) \equiv f(x) + g(x)\), where \(f, g:\mathbb R^n \rightarrow \mathbb R\) are convex functions such that, for any \(\mu >0\) and \(z \in \mathbb R^n\), the two subproblems (2) \(\min_{x \in \mathbb R^n} \{f(x) + \frac{1}{2\mu}\|x-z\|^2\}\) and (3) \(\min_{x \in \mathbb R^n} \{g(x) + \frac{1}{2\mu}\|x-z\|^2\}\) are easy to solve relative to minimizing \(F(x)\). If \(f(\cdot)\) (resp. \(g(\cdot)\)) is a function of a matrix \(X \in \mathbb R^{m\times n}\), then problem (2) (resp. (3)) involves a matrix \(Z \in \mathbb R^{m \times n}\) instead of \(z \in \mathbb R^n\), and \(\|\cdot\|\) denotes the Frobenius norm. For large-scale problems, the class of alternating direction methods based on variable splitting combined with the augmented Lagrangian method is particularly important. In these methods, one splits the variable \(x\) into two variables, i.e., one introduces a new variable \(y\) and rewrites problem (1) as (4) \(\min \{ f(x) + g(y): x-y = 0\}\). Since problem (4) is an equality-constrained problem, the augmented Lagrangian method can be used to solve it. Another important and related class of algorithms for solving (1) is based on operator splitting; these algorithms aim to find an \(x\) such that \(0 \in T_1(x) + T_2(x)\), where \(T_1\) and \(T_2\) are maximal monotone operators, which is a more general problem than (1). Main result: While global convergence results for various splitting and alternating direction algorithms have been established under appropriate conditions, the authors' interest here is in iteration complexity bounds for such algorithms. Finally, numerical results are presented; they demonstrate the efficiency and the practical potential of the authors' algorithms.
Property / review text: The authors consider the convex optimization problem (1) \(\min_{x \in \mathbb R^n} F(x) \equiv f(x) + g(x)\), where \(f, g:\mathbb R^n \rightarrow \mathbb R\) are convex functions such that, for any \(\mu >0\) and \(z \in \mathbb R^n\), the two subproblems (2) \(\min_{x \in \mathbb R^n} \{f(x) + \frac{1}{2\mu}\|x-z\|^2\}\) and (3) \(\min_{x \in \mathbb R^n} \{g(x) + \frac{1}{2\mu}\|x-z\|^2\}\) are easy to solve relative to minimizing \(F(x)\). If \(f(\cdot)\) (resp. \(g(\cdot)\)) is a function of a matrix \(X \in \mathbb R^{m\times n}\), then problem (2) (resp. (3)) involves a matrix \(Z \in \mathbb R^{m \times n}\) instead of \(z \in \mathbb R^n\), and \(\|\cdot\|\) denotes the Frobenius norm. For large-scale problems, the class of alternating direction methods based on variable splitting combined with the augmented Lagrangian method is particularly important. In these methods, one splits the variable \(x\) into two variables, i.e., one introduces a new variable \(y\) and rewrites problem (1) as (4) \(\min \{ f(x) + g(y): x-y = 0\}\). Since problem (4) is an equality-constrained problem, the augmented Lagrangian method can be used to solve it. Another important and related class of algorithms for solving (1) is based on operator splitting; these algorithms aim to find an \(x\) such that \(0 \in T_1(x) + T_2(x)\), where \(T_1\) and \(T_2\) are maximal monotone operators, which is a more general problem than (1). Main result: While global convergence results for various splitting and alternating direction algorithms have been established under appropriate conditions, the authors' interest here is in iteration complexity bounds for such algorithms. Finally, numerical results are presented; they demonstrate the efficiency and the practical potential of the authors' algorithms. / rank
 
Normal rank
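
To make the splitting scheme described in the review text concrete, the following is a minimal Python sketch of an alternating linearization iteration for the toy instance \(f(x) = \frac{1}{2}\|Ax-b\|^2\), \(g(x) = \lambda\|x\|_1\): each step linearizes one of the two functions at the current point and solves the corresponding easy proximal subproblem (2) or (3). This is an illustrative sketch, not the authors' ALM/FALM algorithms as stated in the paper; the matrix A, vector b, parameter lam, and the step-size heuristic below are assumptions for illustration only.

import numpy as np

def prox_f(z, mu, A, b):
    # Subproblem (2) for f(x) = 0.5*||Ax - b||^2: a regularized linear solve.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + np.eye(n) / mu, A.T @ b + z / mu)

def prox_g(z, mu, lam):
    # Subproblem (3) for g(x) = lam*||x||_1: componentwise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - mu * lam, 0.0)

def alternating_linearization(A, b, lam, mu=None, iters=500):
    # Alternate between the two proximal subproblems, each time linearizing
    # the other function at the current iterate (subgradient of g, gradient of f).
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2  # heuristic: roughly 1/L for grad f
    x = y = np.zeros(A.shape[1])
    for _ in range(iters):
        x = prox_f(y - mu * lam * np.sign(y), mu, A, b)    # keep f, linearize g at y
        y = prox_g(x - mu * (A.T @ (A @ x - b)), mu, lam)  # keep g, linearize f at x
    return y

# Illustrative usage on random data (placeholders, not taken from the paper).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
print(alternating_linearization(A, b, lam=0.1)[:5])

The "fast" variants analyzed in the paper obtain their improved iteration complexity bounds through additional algorithmic devices not reflected in this plain sketch.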
Property / reviewed by
 
Property / reviewed by: Jan Lovíšek / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 65K05 / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 68Q25 / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 90C25 / rank
 
Normal rank
Property / Mathematics Subject Classification ID
 
Property / Mathematics Subject Classification ID: 90C06 / rank
 
Normal rank
Property / zbMATH DE Number
 
Property / zbMATH DE Number: 6225201 / rank
 
Normal rank
Property / zbMATH Keywords
 
convex optimization
Property / zbMATH Keywords: convex optimization / rank
 
Normal rank
Property / zbMATH Keywords
 
variable splitting
Property / zbMATH Keywords: variable splitting / rank
 
Normal rank
Property / zbMATH Keywords
 
alternating linearization method
Property / zbMATH Keywords: alternating linearization method / rank
 
Normal rank
Property / zbMATH Keywords
 
alternating direction method
Property / zbMATH Keywords: alternating direction method / rank
 
Normal rank
Property / zbMATH Keywords
 
augmented Lagrangian method
Property / zbMATH Keywords: augmented Lagrangian method / rank
 
Normal rank
Property / zbMATH Keywords
 
optimal gradient method
Property / zbMATH Keywords: optimal gradient method / rank
 
Normal rank
Property / zbMATH Keywords
 
Gauss-Seidel method
Property / zbMATH Keywords: Gauss-Seidel method / rank
 
Normal rank
Property / zbMATH Keywords
 
Peaceman-Rachford method
Property / zbMATH Keywords: Peaceman-Rachford method / rank
 
Normal rank
Property / zbMATH Keywords
 
convergence
Property / zbMATH Keywords: convergence / rank
 
Normal rank
Property / zbMATH Keywords
 
algorithm
Property / zbMATH Keywords: algorithm / rank
 
Normal rank
Property / zbMATH Keywords
 
subgradient
Property / zbMATH Keywords: subgradient / rank
 
Normal rank
Property / zbMATH Keywords
 
Lipschitz constant
Property / zbMATH Keywords: Lipschitz constant / rank
 
Normal rank
Property / zbMATH Keywords
 
Lipschitz continuous gradient
Property / zbMATH Keywords: Lipschitz continuous gradient / rank
 
Normal rank
Property / zbMATH Keywords
 
large-scale problems
Property / zbMATH Keywords: large-scale problems / rank
 
Normal rank
Property / zbMATH Keywords
 
maximal monotone operators
Property / zbMATH Keywords: maximal monotone operators / rank
 
Normal rank
Property / zbMATH Keywords
 
iteration complexity bounds
Property / zbMATH Keywords: iteration complexity bounds / rank
 
Normal rank
Property / zbMATH Keywords
 
numerical results
Property / zbMATH Keywords: numerical results / rank
 
Normal rank

Revision as of 12:03, 29 June 2023

Language: English
Label: Fast alternating linearization methods for minimizing the sum of two convex functions
Description: scientific article

    Statements

    Fast alternating linearization methods for minimizing the sum of two convex functions (English)
    0 references
    0 references
    0 references
    0 references
    0 references
    11 November 2013
    0 references
    The authors consider the convex optimization problem (1) \(\min_{x \in \mathbb R^n} F(x) \equiv f(x) + g(x)\), where \(f, g:\mathbb R^n \rightarrow \mathbb R\) are convex functions such that, for any \(\mu >0\) and \(z \in \mathbb R^n\), the two subproblems (2) \(\min_{x \in \mathbb R^n} \{f(x) + \frac{1}{2\mu}\|x-z\|^2\}\) and (3) \(\min_{x \in \mathbb R^n} \{g(x) + \frac{1}{2\mu}\|x-z\|^2\}\) are easy to solve relative to minimizing \(F(x)\). If \(f(\cdot)\) (resp. \(g(\cdot)\)) is a function of a matrix \(X \in \mathbb R^{m\times n}\), then problem (2) (resp. (3)) involves a matrix \(Z \in \mathbb R^{m \times n}\) instead of \(z \in \mathbb R^n\), and \(\|\cdot\|\) denotes the Frobenius norm. For large-scale problems, the class of alternating direction methods based on variable splitting combined with the augmented Lagrangian method is particularly important. In these methods, one splits the variable \(x\) into two variables, i.e., one introduces a new variable \(y\) and rewrites problem (1) as (4) \(\min \{ f(x) + g(y): x-y = 0\}\). Since problem (4) is an equality-constrained problem, the augmented Lagrangian method can be used to solve it. Another important and related class of algorithms for solving (1) is based on operator splitting; these algorithms aim to find an \(x\) such that \(0 \in T_1(x) + T_2(x)\), where \(T_1\) and \(T_2\) are maximal monotone operators, which is a more general problem than (1). Main result: While global convergence results for various splitting and alternating direction algorithms have been established under appropriate conditions, the authors' interest here is in iteration complexity bounds for such algorithms. Finally, numerical results are presented; they demonstrate the efficiency and the practical potential of the authors' algorithms.
    0 references
    0 references
    convex optimization
    0 references
    variable splitting
    0 references
    alternating linearization method
    0 references
    alternating direction method
    0 references
    augmented Lagrangian method
    0 references
    optimal gradient method
    0 references
    Gauss-Seidel method
    0 references
    Peaceman-Rachford method
    0 references
    convergence
    0 references
    algorithm
    0 references
    subgradient
    0 references
    Lipschitz constant
    0 references
    Lipschitz continuous gradient
    0 references
    large-scale problems
    0 references
    maximal monotone operators
    0 references
    iteration complexity bounds
    0 references
    numerical results
    0 references

    Identifiers

    0 references
    0 references
    0 references
    0 references
    0 references