Second-order non-stationary online learning for regression
Publication: 5744809
zbMATH Open: 1351.68224 · arXiv: 1303.0140 · MaRDI QID: Q5744809 · FDO: Q5744809
Authors: Edward Moroshko, Nina Vaits, Koby Crammer
Publication date: 19 February 2016
Full work available at URL: https://arxiv.org/abs/1303.0140
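For orientation, "second-order" online learners maintain a matrix of second-moment information (rather than just a weight vector) and discount old observations to track drifting targets. The sketch below is a minimal illustration of that family using classical recursive least squares with a forgetting factor; it is an assumed, generic example, not the algorithm proposed in this publication, and the class name `ForgettingRLS` is hypothetical.

```python
# Minimal sketch: recursive least squares (RLS) with a forgetting factor,
# a classical second-order online regression method for non-stationary data.
# This is NOT the paper's algorithm; it only illustrates the setting.
import numpy as np

class ForgettingRLS:
    def __init__(self, dim, forgetting=0.99, reg=1.0):
        self.w = np.zeros(dim)        # current weight vector
        self.P = np.eye(dim) / reg    # inverse of the discounted Gram matrix
        self.lam = forgetting         # forgetting factor in (0, 1]

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, y):
        # Rank-one RLS update with exponential discounting of past data.
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)  # gain vector
        err = y - self.w @ x          # prediction error before updating
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam

# Usage: track a slowly drifting linear target (non-stationarity).
rng = np.random.default_rng(0)
learner = ForgettingRLS(dim=2, forgetting=0.95)
w_true = np.array([1.0, -1.0])
for t in range(1000):
    w_true += 0.01 * rng.normal(size=2)   # slow drift of the true weights
    x = rng.normal(size=2)
    y = w_true @ x + 0.1 * rng.normal()   # noisy observation
    learner.update(x, y)
```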
Recommendations
- Re-adapting the regularization of weights for non-stationary regression
- Adaptive and self-confident on-line learning algorithms
- Online regression with varying Gaussians and non-identical distributions
- A generalized online mirror descent with applications to classification and regression
- Weighted last-step min-max algorithm with improved sub-logarithmic regret
Mathematics Subject Classification (MSC):
- Linear regression; mixed models (62J05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Inference from stochastic processes and prediction (62M20)
Cited In (9)
- Nonstationary online convex optimization with multiple predictions
- Re-adapting the regularization of weights for non-stationary regression
- An upper bound for aggregating algorithm for regression with changing dependencies
- Renewable composite quantile method and algorithm for nonparametric models with streaming data
- Recursive ridge regression using second-order stochastic algorithms
- Non-stationary stochastic optimization
- Second-Order Online Nonconvex Optimization
- Online renewable smooth quantile regression
- One step back, two steps forward: interference and learning in recurrent neural networks