A proximal subgradient algorithm with extrapolation for structured nonconvex nonsmooth problems (Q6141533)

From MaRDI portal

scientific article; zbMATH DE number 7780865

      Statements

      A proximal subgradient algorithm with extrapolation for structured nonconvex nonsmooth problems (English)
      19 December 2023
      In the paper under review, the authors study the following broad optimization problem, which has many important applications in diverse areas, including power control, compressed sensing, portfolio optimization, supply chain problems, and image segmentation. The objective function is the sum of a possibly nonsmooth nonconvex function and a differentiable function with Lipschitz continuous gradient, minus a weakly convex function. This general framework accommodates problems involving nonconvex loss functions as well as problems with specific nonconvex constraints.

      The problem is \(\min_{x\in C} F(x)\), where \(F(x) := f(x) + h(Ax) - g(x)\), \(C\) is a nonempty closed subset of a finite-dimensional real Hilbert space \(H\), \(A\) is a linear mapping from \(H\) to another finite-dimensional real Hilbert space, \(f : H\to (-\infty,\infty]\) is a proper lower semicontinuous (possibly nonsmooth and nonconvex) function, \(h\) is a real-valued differentiable (possibly nonconvex) function whose gradient is Lipschitz continuous, and \(g : H\to (-\infty,\infty]\) is a continuous function that is weakly convex (with some modulus) on an open convex set containing \(C\).

      Two examples are: (1) from statistical learning, \(\min_{x\in \mathbb R^d}(\phi(x) + \gamma r(x))\), where \(\phi\) is a loss function measuring data fidelity, \(r\) is a regularizer promoting specific structure in the solution, such as sparsity, and \(\gamma>0\) is a weighting parameter; (2) the special case where \(A\) is the identity, namely \(\min_{x\in \mathbb R^d} (f(x) + h(x) - g(x))\). A concrete sketch of an iteration for an instance of this form is given below.

      The paper is well written with a good set of references.
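      To make the framework concrete, the following is a minimal numerical sketch of one member of this algorithm family: extrapolate, take a gradient step on the smooth part, a subgradient step on the subtracted weakly convex part, and a proximal step on the nonsmooth part. The toy instance (\(\ell_1-\ell_2\) regularized least squares, i.e. \(f(x)=\gamma\|x\|_1\), \(h(Ax)=\frac12\|Ax-b\|^2\), \(g(x)=\gamma\|x\|_2\), \(C=\mathbb R^d\)), the step size \(1/L\), and the fixed extrapolation weight are illustrative assumptions, not the authors' exact scheme.

import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_subgradient_extrapolation(A, b, gamma=0.1, beta=0.3, n_iter=500):
    """Sketch of a proximal subgradient iteration with extrapolation for
    min_x 0.5*||A x - b||^2 + gamma*(||x||_1 - ||x||_2).
    f = gamma*||.||_1 is handled by its prox, h(Ax) = 0.5*||Ax - b||^2 by its
    gradient, and g = gamma*||.||_2 only by a subgradient; beta is an assumed
    fixed extrapolation weight (the paper's conditions on it are not reproduced)."""
    d = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    lam = 1.0 / L                        # assumed step size
    x_prev = np.zeros(d)
    x = np.zeros(d)
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)      # extrapolated point
        grad = A.T @ (A @ x - b)         # gradient of the smooth part at x^k
        nx = np.linalg.norm(x)
        xi = gamma * x / nx if nx > 0 else np.zeros(d)  # a subgradient of g at x^k
        x_prev, x = x, soft_threshold(y - lam * (grad - xi), lam * gamma)
    return x

# Toy usage: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = proximal_subgradient_extrapolation(A, b)
print("entries above 1e-3:", int(np.sum(np.abs(x_hat) > 1e-3)))

      The design point the sketch illustrates is that the subtracted term \(g\) never needs a proximal map: weak convexity makes a subgradient sufficient, which is what admits nonconvex regularizers such as \(\gamma(\|x\|_1-\|x\|_2)\) within this framework.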
      composite optimization problem
      difference of convex
      distributed energy resources
      extrapolation
      optimal power flow
      proximal subgradient algorithm
