AdaGrad
From MaRDI portal
Software: 33997
swMATH: 22202
MaRDI QID: Q33997
FDO: Q33997
Author name not available
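For context, AdaGrad is the adaptive (sub)gradient method introduced by Duchi, Hazan and Singer (2011): it keeps a per-coordinate running sum of squared gradients and divides the base step size by the square root of that sum, so coordinates that have seen large or frequent gradients take smaller steps. The portal entry itself contains no code; the block below is only a minimal illustrative sketch of the standard update, with hypothetical names (adagrad_step, lr, eps) and a toy quadratic objective chosen for the example.

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    # Minimal sketch of the per-coordinate AdaGrad update; lr and eps are
    # illustrative defaults, not values taken from the swMATH entry.
    accum = accum + grad ** 2                    # accumulate squared gradients
    w = w - lr * grad / (np.sqrt(accum) + eps)   # per-coordinate scaled step
    return w, accum

# Toy usage on f(w) = 0.5 * ||w||^2, whose gradient at w is w itself.
w = np.array([1.0, -2.0])
accum = np.zeros_like(w)
for _ in range(200):
    grad = w
    w, accum = adagrad_step(w, grad, accum)
print(w)  # the iterate shrinks toward the minimizer at the origin
```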
Cited In (only showing first 100 items)
- Convergence and Dynamical Behavior of the ADAM Algorithm for Nonconvex Stochastic Optimization
- Title not available
- An adaptive Polyak heavy-ball method
- Tackling algorithmic bias in neural-network classifiers using Wasserstein-2 regularization
- Variational learning the SDC quantum protocol with gradient-based optimization
- An efficient neural network method with plane wave activation functions for solving Helmholtz equation
- Block layer decomposition schemes for training deep neural networks
- Novel convolutional neural network architecture for improved pulmonary nodule classification on computed tomography
- On stochastic accelerated gradient with convergence rate
- An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems
- An online-learning-based evolutionary many-objective algorithm
- A nonlocal physics-informed deep learning framework using the peridynamic differential operator
- A Consensus-Based Global Optimization Method with Adaptive Momentum Estimation
- Stochastic quasi-Newton with line-search regularisation
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- An accelerated communication-efficient primal-dual optimization framework for structured machine learning
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
- Stochastic proximal linear method for structured non-convex problems
- PPINN: parareal physics-informed neural network for time-dependent PDEs
- A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- Robust unsupervised domain adaptation for neural networks via moment alignment
- A brief introduction to manifold optimization
- Quantum locally linear embedding for nonlinear dimensionality reduction
- Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent
- Linear Algebra and Optimization for Machine Learning
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
- Convergence rates of accelerated proximal gradient algorithms under independent noise
- A selective overview of deep learning
- Convergence of Newton-MR under Inexact Hessian Information
- Semi-supervised online structure learning for composite event recognition
- Adaptive Hamiltonian Variational Integrators and Applications to Symplectic Accelerated Optimization
- Unbiased MLMC Stochastic Gradient-Based Optimization of Bayesian Experimental Designs
- p-kernel Stein variational gradient descent for data assimilation and history matching
- Data-driven algorithm selection and tuning in optimization and signal processing
- Nonlinear approximation via compositions
- Bi-fidelity stochastic gradient descent for structural optimization under uncertainty
- Incremental without replacement sampling in nonconvex optimization
- Synthetic-aperture radar image based positioning in GPS-denied environments using deep cosine similarity neural networks
- Material optimization of tri-directional functionally graded plates by using deep neural network and isogeometric multimesh design approach
- Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach
- A fully stochastic second-order trust region method
- Machine learning to approximate free-surface Green's function and its application in wave-body interactions
- Sequential convergence of AdaGrad algorithm for smooth convex optimization
- Lagrangian relaxation of the generic materials and operations planning model
- Learning context-dependent choice functions
- Adaptive optimization with periodic dither signals
- Fast selection of nonlinear mixed effect models using penalized likelihood
- Stochastic Markov gradient descent and training low-bit neural networks
- Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
- Probabilistic Line Searches for Stochastic Optimization
- Coercing machine learning to output physically accurate results
- Parallel subgradient algorithm with block dual decomposition for large-scale optimization
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Accelerating deep neural network training with inconsistent stochastic gradient descent
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- SRKCD: a stabilized Runge-Kutta method for stochastic optimization
- Reinforcement learning for the knapsack problem
- Optimization for deep learning: an overview
- Quantifying scrambling in quantum neural networks
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations
- Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization
- SABRINA: a stochastic subspace majorization-minimization algorithm
- Nonconvex Policy Search Using Variational Inequalities
- Title not available
- Scale-free online learning
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- Stochastic gradient descent with Polyak's learning rate
- Machine learning in cardiovascular flows modeling: predicting arterial blood pressure from non-invasive 4D flow MRI data using physics-informed neural networks
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
- Learning probabilistic termination proofs
- Deep autoencoders for physics-constrained data-driven nonlinear materials modeling
- Computational mechanics enhanced by deep learning
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- Scalable learning of Bayesian network classifiers
- Scale-Free Algorithms for Online Linear Optimization
- A machine learning approach for efficient uncertainty quantification using multiscale methods
- On data preconditioning for regularized loss minimization
- Robust and sparse regression in generalized linear model by stochastic optimization
- Selection dynamics for deep neural networks
- A globally convergent incremental Newton method
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- On the inductive bias of dropout
- Primal-Dual Algorithms for Optimization with Stochastic Dominance
- Machine learning for fast and reliable solution of time-dependent differential equations
- Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders
- OFFO minimization algorithms for second-order optimality and their complexity
- Adaptive regularization of weight vectors
- Deep relaxation: partial differential equations for optimizing deep neural networks
- Scaling up Bayesian variational inference using distributed computing clusters
- Domain-adversarial training of neural networks
- Background information of deep learning for structural engineering
- Variational inference with vine copulas: an efficient approach for Bayesian computer model calibration
- Scalable estimation strategies based on stochastic approximations: classical results and new insights
- Barren plateaus from learning scramblers with local cost functions
- Monte Carlo co-ordinate ascent variational inference
- A Stochastic Line Search Method with Expected Complexity Analysis
- Weighted last-step min-max algorithm with improved sub-logarithmic regret
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
This page was built for software: AdaGrad