Research on three-step accelerated gradient algorithm in deep learning
From MaRDI portal
Publication: 5880102
DOI: 10.1080/24754269.2020.1846414
OpenAlex: W3110302316
MaRDI QID: Q5880102
FDO: Q5880102
Authors: Yongqiang Lian, Shirong Zhou, Yincai Tang
Publication date: 7 March 2023
Published in: Statistical Theory and Related Fields
Full work available at URL: https://doi.org/10.1080/24754269.2020.1846414
Cites Work
- ggplot2
- Reducing the Dimensionality of Data with Neural Networks
- Pattern recognition and machine learning
- Introductory lectures on convex optimization. A basic course
- Machine learning. A probabilistic perspective
- Learning representations by back-propagating errors
- An unconstrained optimization test functions collection
- Minimization of functions having Lipschitz continuous first partial derivatives
- An optimal method for stochastic composite optimization
- A logical calculus of the ideas immanent in nervous activity
- Some methods of speeding up the convergence of iteration methods
- A Fast Learning Algorithm for Deep Belief Nets
- Machine learning. The art and science of algorithms that make sense of data
- Some Algorithms for Minimizing a Function of Several Variables