Adaptive sequential machine learning
From MaRDI portal
Publication:5215364
Abstract: A framework previously introduced in [3] for solving a sequence of stochastic optimization problems with bounded changes in the minimizers is extended and applied to machine learning problems such as regression and classification. The stochastic optimization problems arising in these machine learning problems are solved using algorithms such as stochastic gradient descent (SGD). A method based on estimates of the change in the minimizers and properties of the optimization algorithm is introduced for adaptively selecting the number of samples at each time step so that the excess risk, i.e., the expected gap between the loss achieved by the approximate minimizer produced by the optimization algorithm and that of the exact minimizer, does not exceed a target level. A bound is developed showing that the estimate of the change in the minimizers is non-trivial provided that the excess risk is small enough. Extensions relevant to the machine learning setting are considered, including a cost-based approach to selecting the number of samples under a cost budget over a fixed horizon, and an approach to applying cross-validation for model selection. Finally, experiments with synthetic and real data are used to validate the algorithms.
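The core idea in the abstract — pick the sample size at each time step so that statistical error plus minimizer drift stays under a target excess risk — can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a simple 1/n decay of excess risk with sample size (as holds for strongly convex losses), and the names `sgd_least_squares`, `choose_num_samples`, and the constant `c_stat` are hypothetical illustrations.

```python
import numpy as np

def sgd_least_squares(X, y, w0, lr=0.1, epochs=5):
    """Plain SGD on squared loss, warm-started from the previous minimizer w0."""
    w = w0.copy()
    n = len(y)
    for _ in range(epochs):
        for i in np.random.permutation(n):
            # gradient of 0.5 * (x_i . w - y_i)^2 with respect to w
            g = (X[i] @ w - y[i]) * X[i]
            w -= lr * g
    return w

def choose_num_samples(target_risk, drift_est, c_stat=1.0):
    """Heuristic sample-size rule: the statistical error term (~ c_stat / n)
    must fit inside the risk budget left after accounting for the estimated
    change in the minimizer since the last step. The floor keeps n finite
    when the drift estimate eats the whole budget."""
    budget = max(target_risk - drift_est, 0.25 * target_risk)
    return int(np.ceil(c_stat / budget))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, target_risk = 3, 0.1
    w_true = rng.standard_normal(d)     # slowly drifting true minimizer
    w_hat = np.zeros(d)
    drift_est = 0.0
    for t in range(5):
        n_t = choose_num_samples(target_risk, drift_est)
        X = rng.standard_normal((n_t, d))
        y = X @ w_true + 0.1 * rng.standard_normal(n_t)
        w_new = sgd_least_squares(X, y, w_hat)
        drift_est = float(np.linalg.norm(w_new - w_hat))  # proxy for minimizer change
        w_hat = w_new
        w_true += 0.01 * rng.standard_normal(d)           # environment drifts
        print(f"t={t}: n_t={n_t}, drift_est={drift_est:.3f}")
```

Note how a larger drift estimate shrinks the remaining risk budget and therefore forces a larger sample size at the next step, which is the qualitative behavior the abstract describes.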
Recommendations
- Adaptive sampling for incremental optimization using stochastic gradient descent
- On the adaptivity of stochastic gradient-based optimization
- Adaptive subgradient methods for online learning and stochastic optimization
- scientific article; zbMATH DE number 1827088
- Adaptive and self-confident on-line learning algorithms
Cites work
- scientific article; zbMATH DE number 1818892
- scientific article; zbMATH DE number 2090195
- Adaptive Sequential Stochastic Optimization
- Adaptive subgradient methods for online learning and stochastic optimization
- Dual averaging methods for regularized stochastic learning and online optimization
- Efficient online and batch learning using forward backward splitting
- Foundations of machine learning
- Implicit Functions and Solution Mappings
- Logarithmic regret algorithms for online convex optimization
- On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming
- Prediction, Learning, and Games
- The elements of statistical learning. Data mining, inference, and prediction
Cited in (3)
This page was built for publication: Adaptive sequential machine learning