MGProx: A nonsmooth multigrid proximal gradient method with adaptive restriction for strongly convex optimization
arXiv: 2302.04077
MaRDI QID: Q6508744
Authors: Andersen Man Shun Ang, H. De Sterck, Stephen A. Vavasis
Abstract: We study the combination of proximal gradient descent with multigrid for solving a class of possibly nonsmooth, strongly convex optimization problems. We propose a multigrid proximal gradient method, called MGProx, which accelerates the proximal gradient method by exploiting the hierarchical structure of the optimization problem. MGProx applies a newly introduced adaptive restriction operator to simplify the Minkowski sum of subdifferentials of the nondifferentiable objective function across levels. We provide a theoretical characterization of MGProx. First, we show that the variables at all levels exhibit a fixed-point property at convergence. Next, we show that the coarse correction is a descent direction for the fine-level variable in the general nonsmooth case. Lastly, under mild assumptions we establish the convergence rate of the algorithm. In numerical experiments we show that MGProx converges significantly faster than proximal gradient descent and its Nesterov-accelerated variant on nonsmooth convex optimization problems such as the elastic obstacle problem.
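To make the two-level structure concrete, the following is a minimal, illustrative Python sketch of a multigrid-accelerated proximal gradient iteration on the Lasso-type model problem min_x 0.5*||Ax - b||^2 + lam*||x||_1. It is not the authors' MGProx implementation: the paper's adaptive restriction operator is replaced here by a fixed pairwise-averaging restriction, the coarse operator and the treatment of the l1 term on the coarse level are heuristic assumptions, and all names (two_level_prox_grad, pair_restriction) are hypothetical. The sketch only retains the ingredients named in the abstract: fine-level proximal gradient smoothing, a tau-corrected coarse problem, and a prolonged coarse correction that is accepted only if it yields descent.

```python
import numpy as np


def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (soft thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)


def pair_restriction(n):
    # Fixed averaging restriction R (n -> n//2); stands in for the
    # paper's adaptive restriction operator (an assumption of this sketch).
    nc = n // 2
    R = np.zeros((nc, n))
    for i in range(nc):
        R[i, 2 * i] = R[i, 2 * i + 1] = 0.5
    return R


def two_level_prox_grad(A, b, lam, n_iter=100):
    # Two-level multigrid proximal gradient sketch for
    #   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    n = A.shape[1]
    R = pair_restriction(n)
    P = 2.0 * R.T                    # piecewise-constant prolongation
    Ac = A @ P                       # coarse-level operator (assumption)
    L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the fine gradient
    Lc = np.linalg.norm(Ac, 2) ** 2  # Lipschitz constant of the coarse gradient
    F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()

    x = np.zeros(n)
    for _ in range(n_iter):
        # Pre-smoothing: one proximal gradient step on the fine level.
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)

        # Coarse problem with a tau-correction so that its smooth gradient
        # at R x matches the restricted fine gradient (FAS-style).
        xc0 = R @ x
        tau = R @ (A.T @ (A @ x - b)) - Ac.T @ (Ac @ xc0 - b)
        xc = xc0.copy()
        for _ in range(20):          # a few coarse prox-grad iterations
            g = Ac.T @ (Ac @ xc - b) + tau
            xc = soft_threshold(xc - g / Lc, lam / Lc)

        # Prolong the coarse correction; backtrack until it gives descent.
        d = P @ (xc - xc0)
        alpha, Fx = 1.0, F(x)
        while alpha > 1e-8 and F(x + alpha * d) >= Fx:
            alpha *= 0.5
        if F(x + alpha * d) < Fx:
            x = x + alpha * d

        # Post-smoothing: another fine-level proximal gradient step.
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x


# Tiny usage example on a random well-conditioned problem.
rng = np.random.default_rng(0)
n = 64
A = rng.standard_normal((2 * n, n)) / np.sqrt(2 * n) + np.eye(2 * n, n)
b = rng.standard_normal(2 * n)
x_hat = two_level_prox_grad(A, b, lam=0.1)
print("objective:", 0.5 * np.sum((A @ x_hat - b) ** 2) + 0.1 * np.abs(x_hat).sum())
```

One design choice mirrors the abstract's descent-direction result: the prolonged coarse correction is applied only after a backtracking check on the fine objective, so the iteration never increases F(x) even when the simplified coarse model is poor.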
Mathematics Subject Classification:
65K05 Numerical mathematical programming methods
90C25 Convex programming
90C90 Applications of mathematical programming
65N55 Multigrid methods; domain decomposition for boundary value problems involving PDEs
90C30 Nonlinear programming
49M37 Numerical methods based on nonlinear programming
49J52 Nonsmooth analysis