Fast alternating linearization methods for minimizing the sum of two convex functions (Q378095)

From MaRDI portal
scientific article
Language: English

    Statements

    Fast alternating linearization methods for minimizing the sum of two convex functions (English)
    11 November 2013
The authors consider the convex optimization problem (1) \(\min_{x \in \mathbb R^n} F(x) \equiv f(x) + g(x)\), where \(f, g : \mathbb R^n \rightarrow \mathbb R\) are both convex functions such that the following two subproblems are easy to solve, relative to minimizing \(F(x)\), for any \(\mu > 0\) and \(z \in \mathbb R^n\): (2) \(\min_{x \in \mathbb R^n} \{f(x) + \frac{1}{2\mu}\|x-z\|^2\}\) and (3) \(\min_{x \in \mathbb R^n} \{g(x) + \frac{1}{2\mu}\|x-z\|^2\}\). When \(f(\cdot)\) and \(g(\cdot)\) are functions of a matrix \(X \in \mathbb R^{m\times n}\), problems (2) and (3) are posed with a matrix \(Z \in \mathbb R^{m \times n}\) in place of \(z \in \mathbb R^n\), and \(\|\cdot\|\) denotes the Frobenius norm.

For large-scale problems, the class of alternating direction methods, which are based on variable splitting combined with the augmented Lagrangian method, is particularly important. In these methods one splits the variable \(x\) into two variables, i.e., one introduces a new variable \(y\) and rewrites problem (1) as (4) \(\min \{ f(x) + g(y) : x - y = 0\}\). Since problem (4) is an equality-constrained problem, the augmented Lagrangian method can be used to solve it; a sketch of the resulting alternating direction iteration is given below.

Another important and related class of algorithms for solving (1) is based on operator splitting. The aim of these algorithms is to find an \(x\) such that \(0 \in T_1(x) + T_2(x)\), where \(T_1\) and \(T_2\) are maximal monotone operators. This is a more general problem than (1), which is recovered by taking \(T_1 = \partial f\) and \(T_2 = \partial g\); a sketch of a classical splitting scheme of this kind also appears below.

Main result: while global convergence of various splitting and alternating direction algorithms has been established under appropriate conditions, the authors' interest here is in iteration complexity bounds for such algorithms. Finally, numerical results are presented; they demonstrate the efficiency and the practical potential of the authors' algorithms.
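To make the "easy" subproblems (2) and (3) concrete, here is a minimal Python sketch of an alternating linearization iteration for the toy choice \(f(x) = \frac12\|Ax-b\|^2\), \(g(x) = \lambda\|x\|_1\) (a LASSO instance). This is a sketch under stated assumptions, not the authors' exact algorithm: the function names, the toy data, and the particular subgradient \(\lambda\,\mathrm{sign}(y)\) used to linearize \(g\) are all illustrative choices.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal map of t * ||.||_1: componentwise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def alternating_linearization(A, b, lam, mu, iters=200):
    # Illustrative iteration for min f(x) + g(x) with
    # f(x) = 0.5 * ||A x - b||^2 and g(x) = lam * ||x||_1.
    n = A.shape[1]
    y = np.zeros(n)
    # The prox of mu*f solves the linear system (mu A^T A + I) x = mu A^T b + v.
    M = mu * (A.T @ A) + np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        gamma_g = lam * np.sign(y)  # a subgradient of g at y (illustrative choice)
        # Subproblem (2) with g linearized: x = prox_{mu f}(y - mu * gamma_g).
        x = np.linalg.solve(M, mu * Atb + (y - mu * gamma_g))
        grad_f = A.T @ (A @ x - b)  # gradient of f at x
        # Subproblem (3) with f linearized: y = prox_{mu g}(x - mu * grad_f).
        y = soft_threshold(x - mu * grad_f, mu * lam)
    return y
```

For a quick check one can run this on random data, e.g. `A = np.random.default_rng(0).standard_normal((40, 100))` with a matching `b`; each half-step costs exactly one proximal subproblem of the form (2) or (3).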
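For the split formulation (4), a standard alternating direction iteration derived from the augmented Lagrangian (ADMM in scaled dual form) on the same toy problem might look as follows; the penalty parameter `rho` and the function name are again illustrative assumptions, and `soft_threshold` is reused from the sketch above.

```python
def admm_split(A, b, lam, rho, iters=200):
    # Sketch of an alternating direction method for (4):
    #   min f(x) + g(y)  subject to  x - y = 0,
    # minimizing the augmented Lagrangian alternately over x and y;
    # u is the scaled Lagrange multiplier for the constraint.
    n = A.shape[1]
    y = np.zeros(n)
    u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (y - u))  # f-block minimization
        y = soft_threshold(x + u, lam / rho)         # g-block: a prox of ||.||_1
        u = u + x - y                                # dual update on x - y = 0
    return y
```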
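For the operator splitting viewpoint, a classical scheme (closely related to the Peaceman-Rachford method listed among the keywords) is Douglas-Rachford splitting, which seeks \(x\) with \(0 \in T_1(x) + T_2(x)\). With \(T_1 = \partial f\) and \(T_2 = \partial g\), the resolvents of \(\mu T_1\) and \(\mu T_2\) are exactly the proximal maps in (2) and (3). A minimal sketch, reusing the helpers above:

```python
def douglas_rachford(A, b, lam, mu, iters=200):
    # Sketch of Douglas-Rachford splitting for 0 in T1(x) + T2(x),
    # specialized to T1 = subdifferential of f, T2 = subdifferential of g.
    n = A.shape[1]
    z = np.zeros(n)
    M = mu * (A.T @ A) + np.eye(n)
    Atb = A.T @ b
    prox_f = lambda v: np.linalg.solve(M, mu * Atb + v)  # resolvent of mu*T1
    prox_g = lambda v: soft_threshold(v, mu * lam)       # resolvent of mu*T2
    for _ in range(iters):
        x = prox_f(z)
        y = prox_g(2.0 * x - z)  # evaluate at the reflected point 2x - z
        z = z + (y - x)          # fixed-point update; prox_f(z) tends to a solution
    return prox_f(z)
```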
convex optimization
variable splitting
alternating linearization method
alternating direction method
augmented Lagrangian method
optimal gradient method
Gauss-Seidel method
Peaceman-Rachford method
convergence
algorithm
subgradient
Lipschitz constant
Lipschitz continuous gradient
large-scale problems
maximal monotone operators
iteration complexity bounds
numerical results