Unconstrained optimization using the directional proximal point method

From MaRDI portal
Publication: 6397692

arXiv: 2204.13370 · MaRDI QID: Q6397692 · FDO: Q6397692


Authors: M. Y. Chung, Jinn Ho, Wen-Liang Hwang


Publication date: 28 April 2022

Abstract: This paper presents a directional proximal point method (DPPM) for finding minima of any C^1-smooth function f. The method requires the function to possess a locally convex segment along a descent direction at any non-critical point (referred to as a DLC direction at that point). The proposed DPPM determines a DLC direction by solving a two-dimensional quadratic optimization problem, regardless of the dimensionality of the function's variables. Along that direction, the DPPM then updates the iterate by solving a one-dimensional optimization problem. This gives the DPPM an advantage over competing methods on large-scale problems involving many variables. We show that the DPPM converges to critical points of f. We also provide conditions under which the entire DPPM sequence converges to a single critical point. For strongly convex quadratic functions, we demonstrate that the error sequence can converge to zero R-superlinearly, regardless of the dimension of the variables.
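The abstract describes a two-stage iteration: pick a descent (DLC) direction, then minimize a one-dimensional proximal subproblem along it. The sketch below illustrates only the outer structure under simplifying assumptions: the negative gradient stands in for the paper's DLC direction (which the actual DPPM obtains from a two-dimensional quadratic subproblem), and the 1-D proximal subproblem is solved numerically by bracketing plus ternary search. Function names and parameters here are illustrative, not from the paper.

```python
import math

def dppm_sketch(f, grad, x0, lam=1.0, tol=1e-6, max_iter=1000):
    """Illustrative directional proximal-point iteration (not the paper's
    exact algorithm).  Each step solves the 1-D proximal subproblem
        t* = argmin_t  f(x + t d) + (lam/2) t^2
    along a unit descent direction d, then updates x <- x + t* d."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = math.sqrt(sum(gi * gi for gi in g))
        if gnorm < tol:                      # stop near a critical point
            break
        # Stand-in for the DLC direction: normalized negative gradient.
        d = [-gi / gnorm for gi in g]

        def phi(t):                          # 1-D proximal objective
            y = [xi + t * di for xi, di in zip(x, d)]
            return f(y) + 0.5 * lam * t * t

        hi = 1.0                             # expand bracket until phi rises
        while phi(2.0 * hi) < phi(hi):
            hi *= 2.0
        hi *= 2.0                            # minimizer now lies in [0, hi]
        lo = 0.0
        for _ in range(60):                  # ternary search on [lo, hi]
            m1 = lo + (hi - lo) / 3.0
            m2 = hi - (hi - lo) / 3.0
            if phi(m1) < phi(m2):
                hi = m2
            else:
                lo = m1
        t = 0.5 * (lo + hi)
        x = [xi + t * di for xi, di in zip(x, d)]
    return x
```

Because the step length solves a proximal subproblem rather than a raw line search, each update decreases f monotonically, mirroring the descent property the abstract claims; the per-iteration cost is dominated by one-dimensional work, independent of the number of variables.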

