AdaGrad
From MaRDI portal
Software: 33997
swMATH: 22202 · MaRDI QID: Q33997 · FDO: Q33997
Author name not available
Cited In (only the first 100 items are shown)
- Title not available
- An adaptive Polyak heavy-ball method
- Tackling algorithmic bias in neural-network classifiers using Wasserstein-2 regularization
- Variational learning the SDC quantum protocol with gradient-based optimization
- An efficient neural network method with plane wave activation functions for solving Helmholtz equation
- Statistics of robust optimization: a generalized empirical likelihood approach
- Block layer decomposition schemes for training deep neural networks
- Novel convolutional neural network architecture for improved pulmonary nodule classification on computed tomography
- On stochastic accelerated gradient with convergence rate
- An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems
- An online-learning-based evolutionary many-objective algorithm
- A nonlocal physics-informed deep learning framework using the peridynamic differential operator
- Stochastic quasi-Newton with line-search regularisation
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- An accelerated communication-efficient primal-dual optimization framework for structured machine learning
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
- Stochastic proximal linear method for structured non-convex problems
- Probabilistic line searches for stochastic optimization
- Nonconvex policy search using variational inequalities
- PPINN: parareal physics-informed neural network for time-dependent PDEs
- A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- The implicit bias of gradient descent on separable data
- A consensus-based global optimization method with adaptive momentum estimation
- Robust unsupervised domain adaptation for neural networks via moment alignment
- A brief introduction to manifold optimization
- Quantum locally linear embedding for nonlinear dimensionality reduction
- Convergence and dynamical behavior of the ADAM algorithm for nonconvex stochastic optimization
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- A selective overview of deep learning
- Semi-supervised online structure learning for composite event recognition
- p-kernel Stein variational gradient descent for data assimilation and history matching
- Data-driven algorithm selection and tuning in optimization and signal processing
- Nonlinear approximation via compositions
- Bi-fidelity stochastic gradient descent for structural optimization under uncertainty
- Incremental without replacement sampling in nonconvex optimization
- Synthetic-aperture radar image based positioning in GPS-denied environments using deep cosine similarity neural networks
- Material optimization of tri-directional functionally graded plates by using deep neural network and isogeometric multimesh design approach
- A fully stochastic second-order trust region method
- Machine learning to approximate free-surface Green's function and its application in wave-body interactions
- Sequential convergence of AdaGrad algorithm for smooth convex optimization
- Lagrangian relaxation of the generic materials and operations planning model
- Learning context-dependent choice functions
- Adaptive optimization with periodic dither signals
- Fast selection of nonlinear mixed effect models using penalized likelihood
- Stochastic Markov gradient descent and training low-bit neural networks
- Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
- Linear algebra and optimization for machine learning. A textbook
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- Scheduled restart momentum for accelerated stochastic gradient descent
- Coercing machine learning to output physically accurate results
- Parallel subgradient algorithm with block dual decomposition for large-scale optimization
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Accelerating deep neural network training with inconsistent stochastic gradient descent
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Convergence of Newton-MR under inexact Hessian information
- SRKCD: a stabilized Runge-Kutta method for stochastic optimization
- Reinforcement learning for the knapsack problem
- Adaptive Hamiltonian variational integrators and applications to symplectic accelerated optimization
- Unbiased MLMC stochastic gradient-based optimization of Bayesian experimental designs
- Optimization for deep learning: an overview
- Quantifying scrambling in quantum neural networks
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations
- Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization
- SABRINA: a stochastic subspace majorization-minimization algorithm
- Primal-dual algorithms for optimization with stochastic dominance
- Scale-free online learning
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- Stochastic gradient descent with Polyak's learning rate
- Machine learning in cardiovascular flows modeling: predicting arterial blood pressure from non-invasive 4D flow MRI data using physics-informed neural networks
- Distribution-specific hardness of learning neural networks
- Adaptivity of stochastic gradient methods for nonconvex optimization
- Learning probabilistic termination proofs
- Deep autoencoders for physics-constrained data-driven nonlinear materials modeling
- Active learning for cost-sensitive classification
- Computational mechanics enhanced by deep learning
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- Scalable learning of Bayesian network classifiers
- Joint online parameter estimation and optimal sensor placement for the partially observed stochastic advection-diffusion equation
- A machine learning approach for efficient uncertainty quantification using multiscale methods
- Second-order stochastic optimization for machine learning in linear time
- Incremental majorization-minimization optimization with application to large-scale machine learning
- On data preconditioning for regularized loss minimization
- Robust and sparse regression in generalized linear model by stochastic optimization
- Optimization methods for large-scale machine learning
- Uncovering causality from multivariate Hawkes integrated cumulants
- Selection dynamics for deep neural networks
- A globally convergent incremental Newton method
- On the inductive bias of dropout
- An infinite restricted Boltzmann machine
- Maximum principle based algorithms for deep learning
- Deep convolutional neural networks for image classification: a comprehensive review
- Machine learning for fast and reliable solution of time-dependent differential equations
- Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders
- OFFO minimization algorithms for second-order optimality and their complexity
- Adaptive regularization of weight vectors
- Deep relaxation: partial differential equations for optimizing deep neural networks
- Scaling up Bayesian variational inference using distributed computing clusters
- Domain-adversarial training of neural networks