Stochastic perturbation of subgradient algorithm for nonconvex deep neural networks
DOI: 10.1007/s40314-023-02307-9 · OpenAlex: W4367603340 · MaRDI QID: Q6161107 · FDO: Q6161107
Authors: Abdelkrim El Mouatasim, Eduardo Souza de Cursi, Rachid Ellaia
Publication date: 2 June 2023
Published in: Computational and Applied Mathematics
Full work available at URL: https://doi.org/10.1007/s40314-023-02307-9
Recommendations
- Adaptive methods using element-wise \(p\)th power of stochastic gradient for nonconvex optimization in deep neural networks
- Convergence of stochastic gradient descent in deep neural network
- A stochastic gradient method with variance control and variable learning rate for deep learning
- AdaLo: adaptive learning rate optimizer with loss for classification
- Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks
Keywords: nonconvex nonsmooth optimization; stochastic perturbation; subgradient algorithm; image classification; learning rate; deep neural networks and CNN
MSC: Learning and adaptive systems in artificial intelligence (68T05); Artificial neural networks and deep learning (68T07); Nonconvex programming, global optimization (90C26)
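For orientation, the titular technique combines a diminishing-step subgradient method with a decaying random perturbation so the iterates can escape shallow local minima of a nonconvex nonsmooth objective. The sketch below is an illustrative reconstruction under that general idea, not the authors' exact algorithm; the test objective `f`, the noise schedule, and all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

# Assumed toy objective: nonconvex and nonsmooth, with a global minimum
# at x = 1 (f = 0) and a shallower local minimum at x = -1 (f = 0.5).
def f(x):
    return min(abs(x - 1.0), abs(x + 1.0) + 0.5)

def subgrad(x):
    # A valid subgradient of whichever branch of f is active at x.
    if abs(x - 1.0) <= abs(x + 1.0) + 0.5:
        return np.sign(x - 1.0)
    return np.sign(x + 1.0)

def perturbed_subgradient(f, subgrad, x0, lr=0.5, sigma0=1.0, iters=2000, seed=0):
    """Subgradient descent with a decaying Gaussian perturbation (sketch).

    Zero-mean noise with variance shrinking in k lets the iterate jump out
    of poor basins early on; the best iterate seen so far is returned as
    the candidate solution.
    """
    rng = np.random.default_rng(seed)
    x = float(x0)
    best_x, best_f = x, f(x)
    for k in range(1, iters + 1):
        step = lr / np.sqrt(k)                            # diminishing step size
        noise = (sigma0 / np.sqrt(k)) * rng.standard_normal()
        x = x - step * subgrad(x) + noise                 # perturbed subgradient step
        fx = f(x)
        if fx < best_f:                                   # track best iterate
            best_x, best_f = x, fx
    return best_x, best_f

# Start in the basin of the local (non-global) minimum.
x_star, f_star = perturbed_subgradient(f, subgrad, x0=-2.0)
```

The unperturbed method started at x0 = -2 would settle at the local minimum x = -1 (f = 0.5); with the decaying noise, the best iterate typically improves on that.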
Cites Work
- Adaptive subgradient methods for online learning and stochastic optimization
- Title not available
- Learning deep architectures for AI
- Pattern recognition and machine learning
- Pattern classification
- Nondifferentiable optimization. Transl. from the Russian by Tetsushi Sasagawa
- Subgradient method for nonconvex nonsmooth optimization
- New variable-metric algorithms for nondifferentiable optimization problems
- Global optimization by random perturbation of the gradient method with a fixed parameter
- Implementation of reduced gradient with bisection algorithms for non-convex optimization problem via stochastic perturbation
- Random perturbation of the projected variable metric method for nonsmooth nonconvex optimization problems with linear constraints
- Random perturbation of the variable metric method for unconstrained nonsmooth nonconvex optimization
- Stochastic perturbation of reduced gradient & GRG methods for nonconvex programming problems
- Multicomposite nonconvex optimization for training deep neural networks
- Backtracking gradient descent method and some applications in large scale optimisation. II: Algorithms and experiments
- An efficient gradient method with approximately optimal stepsize based on tensor model for unconstrained optimization
Cited In (5)
- Title not available
- Subgradient-based feedback neural networks for non-differentiable convex optimization problems
- Taming Neural Networks with TUSLA: Nonconvex Learning via Adaptive Stochastic Gradient Langevin Algorithms
- A stochastic subgradient method for distributionally robust non-convex and non-smooth learning
- Stochastic generalized gradient methods for training nonconvex nonsmooth neural networks