Second-order non-stationary online learning for regression
From MaRDI portal
Publication:5744809
Recommendations
- Re-adapting the regularization of weights for non-stationary regression
- Adaptive and self-confident on-line learning algorithms
- Online regression with varying Gaussians and non-identical distributions
- A generalized online mirror descent with applications to classification and regression
- Weighted last-step min-max algorithm with improved sub-logarithmic regret
Cited in (9)
- Nonstationary online convex optimization with multiple predictions
- Non-stationary stochastic optimization
- One step back, two steps forward: interference and learning in recurrent neural networks
- An upper bound for aggregating algorithm for regression with changing dependencies
- Online renewable smooth quantile regression
- Second-Order Online Nonconvex Optimization
- Re-adapting the regularization of weights for non-stationary regression
- Renewable composite quantile method and algorithm for nonparametric models with streaming data
- Recursive ridge regression using second-order stochastic algorithms
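Several of the works cited above (e.g. recursive ridge regression with second-order stochastic algorithms) build on the same core idea as the publication: maintaining second-order (curvature) information while discounting stale data so the regressor can track a drifting target. A minimal illustrative sketch of that idea, not the paper's own algorithm, is recursive least squares with an exponential forgetting factor; the function name and parameters below are hypothetical:

```python
import numpy as np

def rls_forgetting(X, y, lam=0.98, delta=1.0):
    """Recursive least squares with exponential forgetting (illustrative).

    Maintains the weight vector w and the inverse regularized covariance P
    (the second-order information); lam < 1 geometrically discounts old
    samples so the estimator adapts under non-stationarity.
    """
    n_features = X.shape[1]
    w = np.zeros(n_features)
    P = np.eye(n_features) / delta      # inverse of the regularized covariance
    preds = []
    for x, target in zip(X, y):
        preds.append(float(w @ x))      # predict before the label is revealed
        Px = P @ x
        k = Px / (lam + x @ Px)         # gain vector (second-order step size)
        w = w + k * (target - w @ x)    # curvature-corrected update
        P = (P - np.outer(k, Px)) / lam # discount accumulated covariance
    return w, np.array(preds)
```

With `lam < 1`, the influence of a sample seen `s` steps ago decays like `lam**s`, so after an abrupt shift in the underlying regression weights the estimate re-converges to the new regime.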