Fast and scalable Lasso via stochastic Frank-Wolfe methods with a convergence guarantee

From MaRDI portal
Publication:331671

DOI: 10.1007/s10994-016-5578-4 · zbMATH Open: 1386.68130 · arXiv: 1510.07169 · OpenAlex: W2108943779 · Wikidata: Q62047569 · Scholia: Q62047569 · MaRDI QID: Q331671 · FDO: Q331671

Emanuele Frandi, Ricardo Ñanculef, Johan A. K. Suykens, Stefano Lodi, Claudio Sartori

Publication date: 27 October 2016

Published in: Machine Learning

Abstract: Frank-Wolfe (FW) algorithms have often been proposed over the last few years as efficient solvers for a variety of optimization problems arising in the field of Machine Learning. The ability to work with cheap, projection-free iterations and the incremental nature of the method make FW a very effective choice for many large-scale problems where computing a sparse model is desirable. In this paper, we present a high-performance implementation of the FW method tailored to solve large-scale Lasso regression problems, based on a randomized iteration, and prove that the convergence guarantees of the standard FW method are preserved in the stochastic setting. We show experimentally that our algorithm outperforms several existing state-of-the-art methods, including the Coordinate Descent algorithm by Friedman et al. (one of the fastest known Lasso solvers), on several benchmark datasets with a very large number of features, without sacrificing the accuracy of the model. Our results illustrate that the algorithm is able to generate the complete regularization path on problems with up to four million variables in less than one minute.
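To make the abstract's idea concrete, the following is a minimal sketch of a randomized Frank-Wolfe iteration for the ℓ1-constrained Lasso. It is an illustration of the general technique (sampling a subset of coordinates for the linear subproblem), not the authors' implementation; the function name, parameters, and step-size rule are assumptions.

```python
import numpy as np

def stochastic_fw_lasso(A, b, delta, n_iters=500, sample_size=50, seed=0):
    """Randomized Frank-Wolfe sketch for the constrained Lasso
        min_x 0.5 * ||A x - b||^2   s.t.   ||x||_1 <= delta.
    Each iteration solves the FW linear subproblem over a random sample
    of coordinates rather than all of them (illustrative parameters,
    not the paper's exact scheme)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    residual = -b.astype(float)           # A @ x - b, with x = 0
    for k in range(n_iters):
        # Sample a subset of coordinates and evaluate the partial gradient.
        idx = rng.choice(n, size=min(sample_size, n), replace=False)
        grad = A[:, idx].T @ residual     # grad_i = a_i^T (A x - b)
        j = idx[np.argmax(np.abs(grad))]  # steepest sampled coordinate
        # FW vertex of the l1 ball: s = -delta * sign(grad_j) * e_j.
        s_j = -delta * np.sign(A[:, j] @ residual)
        gamma = 2.0 / (k + 2)             # classic FW step size
        x *= (1 - gamma)
        x[j] += gamma * s_j
        # Incremental residual update: A x - b is affine in x, so
        # r_{k+1} = (1-gamma) r_k + gamma (s_j a_j - b).
        residual = (1 - gamma) * residual + gamma * (s_j * A[:, j] - b)
    return x
```

Because each iterate is a convex combination of ℓ1-ball vertices (and the zero start), the constraint ||x||_1 ≤ delta holds at every step without any projection, which is the source of the method's cheap iterations.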


Full work available at URL: https://arxiv.org/abs/1510.07169










Cited In (5)






