Optimization methods for large-scale machine learning


DOI: 10.1137/16M1080173 · zbMATH Open: 1397.65085 · arXiv: 1606.04838 · OpenAlex: W2963433607 · Wikidata: Q89144557 · MaRDI QID: Q4641709


Authors: Léon Bottou, Jorge Nocedal, Frank E. Curtis


Publication date: 18 May 2018

Published in: SIAM Review

Abstract: This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.
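
The SG method at the paper's center is simple to state: at iteration k, draw a sample index i_k and update the weights by w_{k+1} = w_k - alpha_k * grad f_{i_k}(w_k), where alpha_k is a step size and grad f_{i_k} is the gradient of the loss on the sampled example. A minimal sketch of this loop in Python follows; the function names, the fixed step size, and the least-squares example are illustrative assumptions, not the authors' code.

    import numpy as np

    def sgd(grad_i, w0, n, alpha=0.01, epochs=10, seed=0):
        """Basic stochastic gradient loop: repeatedly draw one sample
        index i and take the step w <- w - alpha * grad_i(w, i).
        (Illustrative sketch; fixed step size chosen for simplicity.)"""
        rng = np.random.default_rng(seed)
        w = np.array(w0, dtype=float)
        for _ in range(epochs):
            for i in rng.permutation(n):   # one shuffled pass over the data
                w -= alpha * grad_i(w, i)  # step along one sample's negative gradient
        return w

    # Illustrative use: least squares, f(w) = (1/n) sum_i (x_i . w - y_i)^2 / 2,
    # whose per-sample gradient is (x_i . w - y_i) * x_i.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5])
    w_hat = sgd(lambda w, i: (X[i] @ w - y[i]) * X[i], np.zeros(3), n=200)

The two research streams named in the abstract modify this loop in different ways: noise-reduction methods (e.g., mini-batching or variance-reduced gradient estimators such as SVRG) replace the single-sample gradient with a lower-variance estimate, while second-order methods (e.g., stochastic quasi-Newton schemes) premultiply the step by an approximation to the inverse Hessian.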


Full work available at URL: https://arxiv.org/abs/1606.04838



