ASD+M: automatic parameter tuning in stochastic optimization and on-line learning
Publication: 2179079
DOI: 10.1016/j.neunet.2017.07.007
zbMATH Open: 1434.68527
OpenAlex: W2751504024
Wikidata: Q47672110 (Scholia: Q47672110)
MaRDI QID: Q2179079
FDO: Q2179079
Publication date: 12 May 2020
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2017.07.007
Cites Work
- Reducing the Dimensionality of Data with Neural Networks
- A Stochastic Approximation Method
- SGD-QN: careful quasi-Newton stochastic gradient descent
- An optimal method for stochastic composite optimization
- Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming
- Some methods of speeding up the convergence of iteration methods
- Autonomous reinforcement learning with experience replay
- Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method
Cited In (1)
Recommendations
- Learning automata and stochastic optimization
- Continuous action set learning automata for stochastic optimization
- Adaptive Sequential Stochastic Optimization
- On the Adaptivity of Stochastic Gradient-Based Optimization