ASD+M: automatic parameter tuning in stochastic optimization and on-line learning
Publication: 2179079
DOI: 10.1016/j.neunet.2017.07.007
zbMATH Open: 1434.68527
OpenAlex: W2751504024
Wikidata: Q47672110
Scholia: Q47672110
MaRDI QID: Q2179079
FDO: Q2179079
Author: Paweł Wawrzyński
Publication date: 12 May 2020
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2017.07.007
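For context on the record's subject: ASD+M addresses on-line, automatic tuning of the step size and momentum factor in stochastic gradient descent. The sketch below shows the plain SGD-with-momentum update whose two hyperparameters such a scheme adapts; it is a minimal illustration under assumed fixed parameter values, not the paper's ASD+M algorithm, and the function name is a hypothetical choice.

```python
import numpy as np

def sgd_momentum_step(theta, velocity, grad, step_size=0.01, momentum=0.9):
    """One SGD-with-momentum update.

    step_size and momentum are the two hyperparameters that an
    ASD+M-style scheme would adapt on-line; the fixed values here
    are illustrative assumptions only.
    """
    velocity = momentum * velocity - step_size * grad
    theta = theta + velocity
    return theta, velocity

# Usage: minimize f(theta) = ||theta||^2 from noisy gradient samples.
rng = np.random.default_rng(0)
theta = rng.normal(size=5)
velocity = np.zeros_like(theta)
for _ in range(200):
    grad = 2.0 * theta + 0.1 * rng.normal(size=5)  # stochastic gradient of ||theta||^2
    theta, velocity = sgd_momentum_step(theta, velocity, grad)
print(theta)  # near the minimizer at the origin
```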
Recommendations
- scientific article; zbMATH DE number 2247821
- Adaptive Sequential Stochastic Optimization
- Adaptive subgradient methods for online learning and stochastic optimization
- Learning automata and stochastic optimization
- Continuous action set learning automata for stochastic optimization
- scientific article; zbMATH DE number 1569106
- On the adaptivity of stochastic gradient-based optimization
- scientific article; zbMATH DE number 3959133
Cites Work
- Reducing the Dimensionality of Data with Neural Networks
- Title not available
- A Stochastic Approximation Method
- SGD-QN: careful quasi-Newton stochastic gradient descent
- An optimal method for stochastic composite optimization
- Title not available
- Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming
- Some methods of speeding up the convergence of iteration methods
- Autonomous reinforcement learning with experience replay
- Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method
Cited In (3)
Uses Software