An inexact regularized proximal Newton method for nonconvex and nonsmooth optimization

DOI: 10.1007/s10589-024-00560-0
arXiv: 2209.09119
MaRDI QID: Q6411230


Authors: Ruyu Liu, Shaohua Pan, Yuqia Wu, Xiaoqi Yang


Publication date: 19 September 2022

Abstract: This paper focuses on the minimization of the sum of a twice continuously differentiable function $f$ and a nonsmooth convex function. We propose an inexact regularized proximal Newton method based on an approximation of the Hessian $\nabla^2 f(x)$ involving the $\varrho$-th power of the KKT residual. For $\varrho=0$, we demonstrate the global convergence of the iterate sequence for the KL objective function and its R-linear convergence rate for the KL objective function of exponent $1/2$. For $\varrho\in(0,1)$, we establish the global convergence of the iterate sequence and its superlinear convergence rate of order $q(1+\varrho)$ under the assumption that cluster points satisfy a local Hölderian error bound of order $q\in(\max(\varrho,\frac{1}{1+\varrho}),1]$ on the strong stationary point set; and when cluster points satisfy a local error bound of order $q>1+\varrho$ on the common stationary point set, we also obtain the global convergence of the iterate sequence and its superlinear convergence rate of order $\frac{(q-\varrho)^2}{q}$ if $q>\frac{2\varrho+1+\sqrt{4\varrho+1}}{2}$. A dual semismooth Newton augmented Lagrangian method is developed for seeking an inexact minimizer of the subproblem. Numerical comparisons with two state-of-the-art methods on $\ell_1$-regularized Student's $t$-regression, group penalized Student's $t$-regression, and nonconvex image restoration confirm the efficiency of the proposed method.
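
To make the scheme concrete, below is a minimal Python sketch of one plausible reading of such an iteration for the special case $g = \lambda\|\cdot\|_1$: the Hessian model is shifted by a multiple of $\|R(x)\|^\varrho$, where $R(x)$ is the KKT residual, and the subproblem is only solved inexactly. The constants b1 and b2, the $\ell_1$ choice of the nonsmooth term, and the proximal-gradient inner loop are illustrative stand-ins, not the authors' implementation; in particular, the paper solves the subproblem with a dual semismooth Newton augmented Lagrangian method rather than proximal gradient steps.

```python
import numpy as np

def prox_l1(v, tau):
    """Proximal map of tau*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def kkt_residual(x, grad_f, lam):
    """KKT residual R(x) = x - prox_{lam*||.||_1}(x - grad f(x));
    R(x) = 0 exactly at stationary points of f + lam*||.||_1."""
    return x - prox_l1(x - grad_f(x), lam)

def inexact_reg_prox_newton(x0, grad_f, hess_f, lam, varrho=0.5,
                            b1=1.0, b2=1e-8, max_iter=50,
                            tol=1e-8, inner_iter=200):
    """Sketch of a regularized proximal Newton loop for
    min_x f(x) + lam*||x||_1.  The Hessian model is shifted by the
    varrho-th power of the KKT residual norm, as in the abstract;
    b1, b2 and the inner solver are illustrative choices."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = kkt_residual(x, grad_f, lam)
        rnorm = np.linalg.norm(r)
        if rnorm <= tol:
            break
        # Regularized Hessian model; the shift vanishes as R(x) -> 0,
        # so the model approaches a pure Newton model near stationarity.
        # (For a strongly nonconvex hess_f, b2 would have to be larger
        # to keep G positive definite.)
        G = hess_f(x) + (b1 * rnorm**varrho + b2) * np.eye(x.size)
        g = grad_f(x)
        # Inexactly minimize the subproblem
        #   q(y) = <g, y-x> + 0.5*(y-x)^T G (y-x) + lam*||y||_1
        # by proximal-gradient steps -- a simple stand-in for the
        # paper's dual semismooth Newton augmented Lagrangian solver.
        L = np.linalg.norm(G, 2)  # spectral norm = gradient Lipschitz const.
        y = x.copy()
        for _ in range(inner_iter):
            grad_q = g + G @ (y - x)
            y = prox_l1(y - grad_q / L, lam / L)
        x = y  # the paper additionally safeguards the step (line search)
    return x

# Hypothetical usage: l1-regularized least squares, f(x) = 0.5*||Ax-b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
hess_f = lambda x: A.T @ A
x_hat = inexact_reg_prox_newton(np.zeros(50), grad_f, hess_f, lam=0.1)
```

On this reading, the point of the $\|R(x)\|^\varrho$ shift is that it is large far from stationarity, stabilizing the Newton model, and vanishes as the residual does, which is what permits the superlinear rates in $q$ and $\varrho$ quoted in the abstract.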









Cited In (3)





