A control theoretic framework for adaptive gradient optimizers
From MaRDI portal
Publication: 6152585
DOI: 10.1016/j.automatica.2023.111466 (MaRDI QID: Q6152585)
Authors: Kushal Chakrabarti, Nikhil Chopra
Publication date: 13 February 2024
Published in: Automatica
Recommendations
- AdaLo: adaptive learning rate optimizer with loss for classification
- Theoretical analysis of Adam using hyperparameters close to one without Lipschitz smoothness
- A modification of adaptive moment estimation (Adam) for machine learning
- Efficient learning rate adaptation based on hierarchical optimization approach
- Adaptive methods using element-wise \(p\)th power of stochastic gradient for nonconvex optimization in deep neural networks
Classifications: Learning and adaptive systems in artificial intelligence (68T05); Nonconvex programming, global optimization (90C26); Adaptive control/observation systems (93C40)
Cites Work
- Adaptive subgradient methods for online learning and stochastic optimization
- Title not available
- Nonlinear systems.
- Robust control. Systems with uncertain physical parameters. In co-operation with A. Bartlett, D. Kaesbauer, W. Sienel, R. Steinhauser
- An LMI approach to constrained optimization with homogeneous forms
- Convergence and dynamical behavior of the ADAM algorithm for nonconvex stochastic optimization
- Iterative pre-conditioning for expediting the distributed gradient-descent method: the case of linear least-squares problem
- Resource-Aware Discretization of Accelerated Optimization Flows: The Heavy-Ball Dynamics Case
- Transient Growth of Accelerated Optimization Algorithms
Cited In (3)