AdaGrad
From MaRDI portal
swMATH: 22202
MaRDI QID: Q33997
FDO: Q33997
Author name not available
Official website: http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf
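For context, the per-coordinate update implemented by AdaGrad (Duchi et al., 2011, linked above) divides the step size by the root of the accumulated squared gradients. The following is a minimal NumPy sketch, not the software catalogued here; the toy quadratic objective, step size, and epsilon value are illustrative assumptions:

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    """One AdaGrad update: each coordinate gets its own effective
    learning rate lr / (sqrt(accumulated squared gradients) + eps)."""
    accum = accum + grad ** 2
    w = w - lr * grad / (np.sqrt(accum) + eps)
    return w, accum

# Toy example: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
accum = np.zeros_like(w)
for _ in range(1000):
    w, accum = adagrad_step(w, 2 * w, accum)
```

Coordinates with historically large gradients take smaller steps, which is the adaptivity the method is named for.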
Cited In (first 100 items shown)
- Stochastic gradient descent with Polyak's learning rate
- Machine learning in cardiovascular flows modeling: predicting arterial blood pressure from non-invasive 4D flow MRI data using physics-informed neural networks
- Distribution-specific hardness of learning neural networks
- Adaptivity of stochastic gradient methods for nonconvex optimization
- Title not available
- Novel convolutional neural network architecture for improved pulmonary nodule classification on computed tomography
- Active learning for cost-sensitive classification
- An online-learning-based evolutionary many-objective algorithm
- Joint online parameter estimation and optimal sensor placement for the partially observed stochastic advection-diffusion equation
- Second-order stochastic optimization for machine learning in linear time
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- An accelerated communication-efficient primal-dual optimization framework for structured machine learning
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
- Stochastic proximal linear method for structured non-convex problems
- Probabilistic line searches for stochastic optimization
- Nonconvex policy search using variational inequalities
- PPINN: parareal physics-informed neural network for time-dependent PDEs
- A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- The implicit bias of gradient descent on separable data
- A consensus-based global optimization method with adaptive momentum estimation
- Robust unsupervised domain adaptation for neural networks via moment alignment
- A brief introduction to manifold optimization
- Quantum locally linear embedding for nonlinear dimensionality reduction
- Convergence and dynamical behavior of the ADAM algorithm for nonconvex stochastic optimization
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- A selective overview of deep learning
- Semi-supervised online structure learning for composite event recognition
- p-kernel Stein variational gradient descent for data assimilation and history matching
- Data-driven algorithm selection and tuning in optimization and signal processing
- Nonlinear approximation via compositions
- Bi-fidelity stochastic gradient descent for structural optimization under uncertainty
- Incremental without replacement sampling in nonconvex optimization
- Synthetic-aperture radar image based positioning in GPS-denied environments using deep cosine similarity neural networks
- Material optimization of tri-directional functionally graded plates by using deep neural network and isogeometric multimesh design approach
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Machine learning to approximate free-surface Green's function and its application in wave-body interactions
- Sequential convergence of AdaGrad algorithm for smooth convex optimization
- Lagrangian relaxation of the generic materials and operations planning model
- Learning context-dependent choice functions
- Adaptive optimization with periodic dither signals
- Fast selection of nonlinear mixed effect models using penalized likelihood
- Stochastic Markov gradient descent and training low-bit neural networks
- Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
- DILAND
- Linear algebra and optimization for machine learning. A textbook
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- Scheduled restart momentum for accelerated stochastic gradient descent
- Coercing machine learning to output physically accurate results
- Parallel subgradient algorithm with block dual decomposition for large-scale optimization
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Accelerating deep neural network training with inconsistent stochastic gradient descent
- Control-based algorithms for high dimensional online learning
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Convergence of Newton-MR under inexact Hessian information
- Scale-free algorithms for online linear optimization
- SRKCD: a stabilized Runge-Kutta method for stochastic optimization
- Reinforcement learning for the knapsack problem
- Ensemble clustering for efficient robust optimization of naturally fractured reservoirs
- Optimization for deep learning: an overview
- Hyperlink regression via Bregman divergence
- Quantifying scrambling in quantum neural networks
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations
- Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization
- SABRINA: a stochastic subspace majorization-minimization algorithm
- A unified framework for stochastic optimization
- GLOB
- PIKAIA
- LIBSVM
- BCLS
- Spacemap
- DESMOND
- LMBM
- Theano
- NPtool
- RNNLIB
- RCV1
- Sparco
- UNLocBoX
- Penn Treebank
- Pegasos
- iPiano
- darch
- Jellyfish
- Orbifolder
- MNIST
- iPiasco
- ggks
- RestoVMFB_Lab
- Hybrid Stable Spline Toolbox
- ARock
- LIBMF
- DNdisorder
- Caffe
- CIFAR
- cuDNN
- fastFM
- DRAGON
- Seg3D
This page was built for software: AdaGrad