Stochastic perturbation of subgradient algorithm for nonconvex deep neural networks
From MaRDI portal
Publication:6161107
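The publication concerns subgradient methods with stochastic perturbation for nonsmooth nonconvex problems. As rough context only, a minimal generic sketch of a perturbed subgradient iteration on the nonsmooth function \(f(x) = |x|\) is shown below; the diminishing step and noise schedules here are illustrative assumptions, not the algorithm from the paper.

```python
import random

def subgrad_abs(x):
    # A subgradient of |x|: sign(x) away from 0, and 0 (an element of [-1, 1]) at x = 0.
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def perturbed_subgradient(x0, steps=500, seed=0):
    """Subgradient descent with additive Gaussian perturbation.

    Both the step size and the noise scale decay as 1/k; these
    schedules are hypothetical choices for illustration.
    """
    rng = random.Random(seed)
    x = x0
    for k in range(1, steps + 1):
        step = 1.0 / k                      # diminishing step size
        sigma = 1.0 / k                     # decaying perturbation scale
        x = x - step * subgrad_abs(x) + sigma * rng.gauss(0.0, 1.0)
    return x

# The perturbation lets the iterate explore; as the noise decays,
# the iterate settles near the minimizer x = 0.
x_final = perturbed_subgradient(5.0)
```

The random perturbation is the ingredient the cited line of work (Pogu, El Mouatasim, and coauthors) uses to escape poor stationary points in nonconvex landscapes; this sketch only demonstrates the mechanics on a convex toy function.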
Recommendations
- Adaptive methods using element-wise \(p\)th power of stochastic gradient for nonconvex optimization in deep neural networks
- Convergence of stochastic gradient descent in deep neural network
- A stochastic gradient method with variance control and variable learning rate for deep learning
- AdaLo: adaptive learning rate optimizer with loss for classification
- Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks
Cites work
- scientific article; zbMATH DE number 6378127 (title unavailable)
- Adaptive subgradient methods for online learning and stochastic optimization
- An efficient gradient method with approximately optimal stepsize based on tensor model for unconstrained optimization
- Backtracking gradient descent method and some applications in large scale optimisation. II: Algorithms and experiments
- Global optimization by random perturbation of the gradient method with a fixed parameter
- Implementation of reduced gradient with bisection algorithms for non-convex optimization problem via stochastic perturbation
- Learning deep architectures for AI
- Multicomposite nonconvex optimization for training deep neural networks
- New variable-metric algorithms for nondifferentiable optimization problems
- Nondifferentiable optimization. Transl. from the Russian by Tetsushi Sasagawa
- Pattern classification
- Pattern recognition and machine learning
- Random perturbation of the projected variable metric method for nonsmooth nonconvex optimization problems with linear constraints
- Random perturbation of the variable metric method for unconstrained nonsmooth nonconvex optimization
- Stochastic perturbation of reduced gradient & GRG methods for nonconvex programming problems
- Subgradient method for nonconvex nonsmooth optimization
Cited in (5)
- scientific article; zbMATH DE number 2186223 (title unavailable)
- Subgradient-based feedback neural networks for non-differentiable convex optimization problems
- Taming Neural Networks with TUSLA: Nonconvex Learning via Adaptive Stochastic Gradient Langevin Algorithms
- A stochastic subgradient method for distributionally robust non-convex and non-smooth learning
- Stochastic generalized gradient methods for training nonconvex nonsmooth neural networks
This page was built for publication: Stochastic perturbation of subgradient algorithm for nonconvex deep neural networks