An inexact successive quadratic approximation method for L-1 regularized optimization

Publication:301652

DOI: 10.1007/S10107-015-0941-Y
zbMATH Open: 1342.49037
arXiv: 1309.3529
OpenAlex: W2162870776
MaRDI QID: Q301652
FDO: Q301652

Figen Oztoprak, R. H. Byrd, Jorge Nocedal

Publication date: 1 July 2016

Published in: Mathematical Programming. Series A. Series B

Abstract: We study a Newton-like method for the minimization of an objective function that is the sum of a smooth convex function and an l-1 regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model of the objective function. In order to make this approach efficient in practice, it is imperative to perform this inner minimization inexactly. In this paper, we give inexactness conditions that guarantee global convergence and that can be used to control the local rate of convergence of the iteration. Our inexactness conditions are based on a semi-smooth function that represents a (continuous) measure of the optimality conditions of the problem, and that embodies the soft-thresholding iteration. We give careful consideration to the algorithm employed for the inner minimization, and report numerical results on two test sets originating in machine learning.
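The sketch below (in Python, with hypothetical names such as soft_threshold, optimality_measure, inexact_prox_newton, grad_f, hess_f, and eta) is only meant to illustrate the kind of iteration the abstract describes: an outer proximal Newton loop whose piecewise quadratic model is minimized inexactly by soft-thresholding (ISTA) steps, with a stopping test built from the same soft-thresholding residual used as an optimality measure. The particular inexactness rule shown here is illustrative and does not reproduce the paper's precise conditions.

```python
import numpy as np


def soft_threshold(z, tau):
    """Componentwise soft-thresholding operator S_tau(z)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)


def optimality_measure(x, grad, lam, alpha=1.0):
    """Continuous optimality measure for min f(x) + lam*||x||_1:
    the norm of the residual of one soft-thresholding (ISTA) step,
    which vanishes exactly at a minimizer."""
    return np.linalg.norm(x - soft_threshold(x - alpha * grad, alpha * lam))


def inexact_prox_newton(grad_f, hess_f, lam, x0,
                        tol=1e-6, eta=0.1, max_outer=100, max_inner=200):
    """Illustrative inexact successive quadratic approximation
    (proximal Newton) iteration.  The piecewise quadratic model
        q(d) = g.T d + 0.5 d.T H d + lam * ||x + d||_1
    is minimized approximately by ISTA steps, stopped once the
    model's own optimality measure falls below eta times the outer
    measure (an assumed inexactness rule, not the paper's)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_outer):
        g = grad_f(x)
        outer_res = optimality_measure(x, g, lam)
        if outer_res <= tol:
            break
        H = hess_f(x)
        L = np.linalg.norm(H, 2) + 1e-12    # inner ISTA step size is 1/L
        d = np.zeros_like(x)
        for _ in range(max_inner):
            model_grad = g + H @ d          # gradient of the smooth model part at x + d
            d = soft_threshold(x + d - model_grad / L, lam / L) - x
            if optimality_measure(x + d, g + H @ d, lam) <= eta * outer_res:
                break                       # inner problem solved "accurately enough"
        x = x + d                           # unit step for simplicity (no globalization)
    return x
```

A real implementation would add a globalization mechanism (for example, a line search on the step d) and a more carefully chosen inner solver; the sketch only shows where a soft-thresholding-based measure can enter both the outer stopping test and the inexactness condition for the subproblem.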


Full work available at URL: https://arxiv.org/abs/1309.3529










