Adaptive sequential machine learning

From MaRDI portal
Publication:5215364

DOI: 10.1080/07474946.2019.1686889 · zbMATH Open: 1429.68241 · arXiv: 1904.02773 · OpenAlex: W3003901708 · MaRDI QID: Q5215364


Authors: Craig Wilson, Yuheng Bu, Venugopal V. Veeravalli


Publication date: 10 February 2020

Published in: Sequential Analysis

Abstract: A framework previously introduced in [3] for solving a sequence of stochastic optimization problems with bounded changes in the minimizers is extended and applied to machine learning problems such as regression and classification. The stochastic optimization problems arising in these machine learning problems are solved using algorithms such as stochastic gradient descent (SGD). A method based on estimates of the change in the minimizers and properties of the optimization algorithm is introduced for adaptively selecting the number of samples at each time step, so that the excess risk, i.e., the expected gap between the loss achieved by the approximate minimizer produced by the optimization algorithm and the loss of the exact minimizer, does not exceed a target level. A bound is developed to show that the estimate of the change in the minimizers is non-trivial provided that the excess risk is small enough. Extensions relevant to the machine learning setting are considered, including a cost-based approach to selecting the number of samples under a cost budget over a fixed horizon, and an approach to applying cross-validation for model selection. Finally, experiments with synthetic and real data are used to validate the algorithms.
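The adaptive sample-selection idea described in the abstract can be illustrated with a minimal sketch. The scenario below is a hypothetical one-dimensional quadratic risk whose minimizer drifts by a bounded amount at each time step; the constants, the drift model, and the sample-count rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 0.5   # observation noise std (assumed)
rho = 0.2     # assumed bound on per-step drift of the minimizer
eps = 0.05    # target excess risk
lr = 0.1      # SGD step size

def sgd_quadratic(w, theta, n):
    """Run n SGD steps on f(w) = E[(w - x)^2] / 2 with x ~ N(theta, sigma^2)."""
    for _ in range(n):
        x = theta + sigma * rng.normal()
        w -= lr * (w - x)  # stochastic gradient is (w - x)
    return w

w, theta = 0.0, 1.0
counts = []
for t in range(20):
    # the exact minimizer theta drifts by at most rho per time step
    theta += rng.uniform(-rho, rho)
    # crude estimate of the squared distance to the new minimizer:
    # previous excess-risk budget plus the worst-case drift
    d2 = (np.sqrt(2 * eps) + rho) ** 2
    # SGD contracts the squared distance by (1 - lr)^2 per step (noise aside),
    # so pick the smallest n with (1 - lr)^(2n) * d2 <= eps
    n = max(1, int(np.ceil(np.log(eps / d2) / (2 * np.log(1 - lr)))))
    counts.append(n)
    w = sgd_quadratic(w, theta, n)
```

Under these assumptions the rule requests the same number of samples at every step, since the drift bound is constant; the paper's method instead estimates the change in the minimizers from data, which is where the non-triviality bound in the abstract comes in.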


Full work available at URL: https://arxiv.org/abs/1904.02773




Cited In (3)






This page was built for publication: Adaptive sequential machine learning
