scientific article; zbMATH DE number 6253934
Publication: 5396673
zbMath: 1280.68164
MaRDI QID: Q5396673
John C. Duchi, Elad Hazan, Yoram Singer
Publication date: 3 February 2014
Full work available at URL: http://www.jmlr.org/papers/v12/duchi11a.html
Title: Adaptive subgradient methods for online learning and stochastic optimization
MSC: Convex programming (90C25); Learning and adaptive systems in artificial intelligence (68T05); Stochastic programming (90C15)
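The publication introduces AdaGrad, a family of subgradient methods that adapt the step size per coordinate to the geometry of the data observed so far: each coordinate is scaled by the inverse square root of the accumulated squared (sub)gradients for that coordinate. A minimal NumPy sketch of the diagonal variant follows; the function and parameter names (adagrad, eta, eps) and the toy least-squares example are illustrative assumptions, not taken from the paper.

    import numpy as np

    def adagrad(grad, x0, eta=0.1, eps=1e-8, n_steps=1000):
        # Diagonal AdaGrad: each coordinate's step is eta / sqrt(accumulated g^2).
        x = x0.astype(float)
        g_sq = np.zeros_like(x)  # running sum of squared (sub)gradients
        for _ in range(n_steps):
            g = grad(x)
            g_sq += g * g
            x -= eta * g / (np.sqrt(g_sq) + eps)  # eps guards against divide-by-zero
        return x

    # Toy usage: run AdaGrad steps toward the minimizer of ||Ax - b||^2
    # on a badly scaled problem (illustrative, not from the paper).
    A = np.array([[3.0, 0.0], [0.0, 0.1]])
    b = np.array([1.0, 1.0])
    x_min = adagrad(lambda x: 2.0 * A.T @ (A @ x - b), np.zeros(2))

The effect of the per-coordinate scaling is that coordinates with a history of large gradients take small steps, while rarely updated coordinates (e.g. those tied to sparse features) keep large steps; this is the behavior the paper analyzes in the online and stochastic settings.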
Related Items
Mini-Batch Metropolis–Hastings With Reversible SGLD Proposal
Bayesian Projected Calibration of Computer Models
Artificial-neural-network-based nonlinear algebraic models for large-eddy simulation of compressible wall-bounded turbulence
Linearly Constrained Nonsmooth Optimization for Training Autoencoders
Subgradient ellipsoid method for nonsmooth convex problems
Combining gradient optimization and machine learning methods for inverse problems in layered heterogeneous media
A stochastic gradient method for a class of nonlinear PDE-constrained optimal control problems under uncertainty
SCORE: approximating curvature information under self-concordant regularization
Block-cyclic stochastic coordinate descent for deep neural networks
Automatic, dynamic, and nearly optimal learning rate specification via local quadratic approximation
How to handle noisy labels for robust learning from uncertainty
A distributed optimisation framework combining natural gradient with Hessian-free for discriminative sequence training
Convergence analysis of AdaBound with relaxed bound functions for non-convex optimization
Stratified Cox models with time-varying effects for national kidney transplant patients: A new blockwise steepest ascent method
Stochastic momentum methods for non-convex learning without bounded assumptions
Multivariate online regression analysis with heterogeneous streaming data
Graph deep learning model for mapping mineral prospectivity
An indefinite proximal subgradient-based algorithm for nonsmooth composite optimization
A mini-batch stochastic conjugate gradient algorithm with variance reduction
Comprehensive study of variational Bayes classification for dense deep neural networks
Three ways to solve partial differential equations with neural networks — A review
Time series analysis and prediction of nonlinear systems with ensemble learning framework applied to deep learning neural networks
Efficient learning rate adaptation based on hierarchical optimization approach
A zeroing neural dynamics based acceleration optimization approach for optimizers in deep neural networks
Multilevel Objective-Function-Free Optimization with an Application to Neural Networks Training
Successfully and efficiently training deep multi-layer perceptrons with logistic activation function simply requires initializing the weights with an appropriate negative mean
Eigenvalue-Corrected Natural Gradient Based on a New Approximation
Convergence of the RMSProp deep learning method with penalty for nonconvex optimization
A stepwise physics-informed neural network for solving large deformation problems of hypoelastic materials
Semi-implicit back propagation
Variational inference for Bayesian bridge regression
Adaptive stochastic gradient descent for optimal control of parabolic equations with random parameters
Facial Action Units Detection to Identify Interest Emotion: An Application of Deep Learning
Parallel and distributed asynchronous adaptive stochastic gradient methods
Speeding-up one-versus-all training for extreme classification via mean-separating initialization
SVRG meets AdaGrad: painless variance reduction
Variance reduction on general adaptive stochastic mirror descent
Optimistic optimisation of composite objective with exponentiated update
Black Box Variational Bayesian Model Averaging
A noise-based stabilizer for convolutional neural networks
Online Covariance Matrix Estimation in Stochastic Gradient Descent
Batching Adaptive Variance Reduction
Error convergence and engineering-guided hyperparameter search of PINNs: towards optimized I-FENN performance
Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
Adaptive step size rules for stochastic optimization in large-scale learning
Addressing discontinuous root-finding for subsequent differentiability in machine learning, inverse problems, and control
Efficient approximations of the fisher matrix in neural networks using kronecker product singular value decomposition
Projective Integral Updates for High-Dimensional Variational Inference
Online decision making for trading wind energy
A new taxonomy of global optimization algorithms
Variable separated physics-informed neural networks based on adaptive weighted loss functions for blood flow model
Theoretical analysis of Adam using hyperparameters close to one without Lipschitz smoothness
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations
On the Adaptivity of Stochastic Gradient-Based Optimization
The Discriminative Kalman Filter for Bayesian Filtering with Nonlinear and Nongaussian Observation Models
A Continuous-Time Analysis of Distributed Stochastic Gradient
An Infinite Restricted Boltzmann Machine
Nonconvex Policy Search Using Variational Inequalities
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review
Accelerating Sparse Recovery by Reducing Chatter
Convergence and Dynamical Behavior of the ADAM Algorithm for Nonconvex Stochastic Optimization
Convergence of Newton-MR under Inexact Hessian Information
Why Does Large Batch Training Result in Poor Generalization? A Comprehensive Explanation and a Better Strategy from the Viewpoint of Stochastic Optimization
$l_p$ Regularization for Ensemble Kalman Inversion
PNKH-B: A Projected Newton--Krylov Method for Large-Scale Bound-Constrained Optimization
A Distributed Optimal Control Problem with Averaged Stochastic Gradient Descent
Dying ReLU and Initialization: Theory and Numerical Examples
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
Scalable estimation strategies based on stochastic approximations: classical results and new insights
Stochastic sub-sampled Newton method with variance reduction
Machine Learning in Adaptive Domain Decomposition Methods---Predicting the Geometric Location of Constraints
Adaptive sequential machine learning
A Stochastic Line Search Method with Expected Complexity Analysis
Abstract convergence theorem for quasi-convex optimization problems with applications
An Inertial Newton Algorithm for Deep Learning
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
Entropy-SGD: biasing gradient descent into wide valleys
Conformal symplectic and relativistic optimization
Stochastic proximal linear method for structured non-convex problems
An accelerated communication-efficient primal-dual optimization framework for structured machine learning
Joint Online Parameter Estimation and Optimal Sensor Placement for the Partially Observed Stochastic Advection-Diffusion Equation
Distributed Stochastic Inertial-Accelerated Methods with Delayed Derivatives for Nonconvex Problems
Adaptive online distributed optimization in dynamic environments
An Adaptive Gradient Method with Energy and Momentum
Multi-Objective Optimization of Laminated Functionally Graded Carbon Nanotube-Reinforced Composite Plates Using Deep Feedforward Neural Networks-NSGAII Algorithm
An inexact first-order method for constrained nonlinear optimization
A fully stochastic second-order trust region method
Stochastic Optimization for Dynamic Pricing
Adaptive Gradient-Free Method for Stochastic Optimization
Random Batch Methods for Classical and Quantum Interacting Particle Systems and Statistical Samplings
Adaptive Quadratically Regularized Newton Method for Riemannian Optimization
Quasi-Newton methods for machine learning: forget the past, just sample
A Stochastic Second-Order Generalized Estimating Equations Approach for Estimating Association Parameters
Convergence acceleration of ensemble Kalman inversion in nonlinear settings
Stochastic Methods for Composite and Weakly Convex Optimization Problems
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
A Consensus-Based Global Optimization Method with Adaptive Momentum Estimation
Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
Asymptotic optimality in stochastic optimization
A survey of deep network techniques all classifiers can adopt
Global and local optimization in identification of parabolic systems
Detecting Product Adoption Intentions via Multiview Deep Learning
Privacy-preserving distributed deep learning based on secret sharing
Stochastic quasi-Newton with line-search regularisation
Unbiased MLMC-based Variational Bayes for Likelihood-Free Inference
A hybrid MGA-MSGD ANN training approach for approximate solution of linear elliptic PDEs
Scaling up Bayesian variational inference using distributed computing clusters
Kernel-based online gradient descent using distributed approach
Big data driven order-up-to level model: application of machine learning
Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent
Nonlinear Reduced DNN Models for State Estimation
slimTrain---A Stochastic Approximation Method for Training Separable Deep Neural Networks
Application of Monte Carlo stochastic optimization (MOST) to deep learning
Online strongly convex optimization with unknown delays
Convergence rates of accelerated proximal gradient algorithms under independent noise
Integrated finite element neural network (I-FENN) for non-local continuum damage mechanics
Primal-Dual Algorithms for Optimization with Stochastic Dominance
Quantum locally linear embedding for nonlinear dimensionality reduction
Prediction of permeability of porous media using optimized convolutional neural networks
Semi-supervised online structure learning for composite event recognition
Controlling unknown linear dynamics with bounded multiplicative regret
Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
Physics-informed neural networks based on adaptive weighted loss functions for Hamilton-Jacobi equations
Barren plateaus from learning scramblers with local cost functions
OFFO minimization algorithms for second-order optimality and their complexity
On the asymptotic rate of convergence of stochastic Newton algorithms and their weighted averaged versions
Optimization for deep learning: an overview
A review on deep learning in medical image reconstruction
How can machine learning and optimization help each other better?
Bi-fidelity stochastic gradient descent for structural optimization under uncertainty
Machine learning for fast and reliable solution of time-dependent differential equations
Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders
Coercing machine learning to output physically accurate results
Multilevel Stochastic Gradient Methods for Nested Composition Optimization
A linearly convergent stochastic recursive gradient method for convex optimization
Parallel Optimization Techniques for Machine Learning
Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance
A nonlocal physics-informed deep learning framework using the peridynamic differential operator
Deep autoencoders for physics-constrained data-driven nonlinear materials modeling
Optimization Methods for Large-Scale Machine Learning
Deep learning for quantile regression under right censoring: deepquantreg
Variable Metric Inexact Line-Search-Based Methods for Nonsmooth Optimization
Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
Scale-invariant unconstrained online learning
Accelerating deep neural network training with inconsistent stochastic gradient descent
Control-based algorithms for high dimensional online learning
Search Direction Correction with Normalized Gradient Makes First-Order Methods Faster
Lagrangian relaxation of the generic materials and operations planning model
Robust and sparse regression in generalized linear model by stochastic optimization
Stochastic gradient Langevin dynamics with adaptive drifts
Computational mechanics enhanced by deep learning
Scale-Free Algorithms for Online Linear Optimization
Deep relaxation: partial differential equations for optimizing deep neural networks
Probabilistic Line Searches for Stochastic Optimization
Knowledge Graph Completion via Complex Tensor Factorization
Efficient learning with robust gradient descent
Information-Theoretic Representation Learning for Positive-Unlabeled Classification
Joint Structure and Parameter Optimization of Multiobjective Sparse Neural Network
Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses
Adaptive Hamiltonian Variational Integrators and Applications to Symplectic Accelerated Optimization
A globally convergent incremental Newton method
High generalization performance structured self-attention model for knapsack problem
Unbiased MLMC Stochastic Gradient-Based Optimization of Bayesian Experimental Designs
An efficient neural network method with plane wave activation functions for solving Helmholtz equation
Structure probing neural network deflation
A deep learning algorithm for high-dimensional exploratory item factor analysis
Fast estimation of multivariate spatiotemporal Hawkes processes and network reconstruction
Bregman proximal gradient algorithms for deep matrix factorization
Stronger data poisoning attacks break data sanitization defenses
On better training the infinite restricted Boltzmann machines
On data preconditioning for regularized loss minimization
SelectNet: self-paced learning for high-dimensional partial differential equations
Learning probabilistic termination proofs
Deep learning of CMB radiation temperature
On obtaining sparse semantic solutions for inverse problems, control, and neural network training
An augmented Lagrangian model for signal segmentation
Stochastic approximation method using diagonal positive-definite matrices for convex optimization with fixed point constraints
A general neural particle method for hydrodynamics modeling
Background information of deep learning for structural engineering
Adaptive primal-dual stochastic gradient method for expectation-constrained convex stochastic programs
Improved architectures and training algorithms for deep operator networks
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Finite-sum smooth optimization with SARAH
Stochastic optimization using a trust-region method and random models
Generalized mirror descents in congestion games
Riemannian stochastic fixed point optimization algorithm
Online active classification via margin-based and feature-based label queries
Interpreting rate-distortion of variational autoencoder and using model uncertainty for anomaly detection
Physics-informed distribution transformers via molecular dynamics and deep neural networks
Laplacian smoothing gradient descent
The computational asymptotics of Gaussian variational inference and the Laplace approximation
Block layer decomposition schemes for training deep neural networks
Adaptive regularization of weight vectors
A framework for parallel and distributed training of neural networks
Nonlinear approximation via compositions
Data science applications to string theory
Feature-aware regularization for sparse online learning
Novel convolutional neural network architecture for improved pulmonary nodule classification on computed tomography
Inference, learning and attention mechanisms that exploit and preserve sparsity in CNNs
Inexact proximal stochastic gradient method for convex composite optimization
Monte Carlo co-ordinate ascent variational inference
The mechanism of additive composition
Correctness of automatic differentiation via diffeologies and categorical gluing
A machine learning approach for efficient uncertainty quantification using multiscale methods
Gaussian variational approximation with sparse precision matrices
Scale-free online learning
A heuristic adaptive fast gradient method in stochastic optimization problems
Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
Ensemble clustering for efficient robust optimization of naturally fractured reservoirs
Robust unsupervised domain adaptation for neural networks via moment alignment
A unified framework for stochastic optimization
A brief introduction to manifold optimization
Weighted last-step min-max algorithm with improved sub-logarithmic regret
A generalized online mirror descent with applications to classification and regression
An efficient approach to diagnose brain tumors through deep CNN
Hyperlink regression via Bregman divergence
Stochastic gradient descent with Polyak's learning rate
Machine learning in cardiovascular flows modeling: predicting arterial blood pressure from non-invasive 4D flow MRI data using physics-informed neural networks
Minimizing finite sums with the stochastic average gradient
An online-learning-based evolutionary many-objective algorithm
Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
Selection dynamics for deep neural networks
Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
PPINN: parareal physics-informed neural network for time-dependent PDEs
A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics
Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
A Stochastic Quasi-Newton Method for Large-Scale Optimization
A selective overview of deep learning
p-kernel Stein variational gradient descent for data assimilation and history matching
Data-driven algorithm selection and tuning in optimization and signal processing
Incremental without replacement sampling in nonconvex optimization
Synthetic-aperture radar image based positioning in GPS-denied environments using deep cosine similarity neural networks
Material optimization of tri-directional functionally graded plates by using deep neural network and isogeometric multimesh design approach
Machine learning to approximate free-surface Green's function and its application in wave-body interactions
Efficient stochastic optimisation by unadjusted Langevin Monte Carlo. Application to maximum marginal likelihood and empirical Bayesian estimation
Particle-based energetic variational inference
Analysis of stochastic gradient descent in continuous time
Tight bounds on the mutual coherence of sensing matrices for Wigner d-functions on regular grids
Sequential convergence of AdaGrad algorithm for smooth convex optimization
AdaGrad
Learning context-dependent choice functions
Adaptive optimization with periodic dither signals
Fast selection of nonlinear mixed effect models using penalized likelihood
Stochastic Markov gradient descent and training low-bit neural networks
Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
Parallel subgradient algorithm with block dual decomposition for large-scale optimization
Weak adversarial networks for high-dimensional partial differential equations
An adaptive high order method for finding third-order critical points of nonconvex optimization
A stochastic version of Stein variational gradient descent for efficient sampling
SRKCD: a stabilized Runge-Kutta method for stochastic optimization
Reinforcement learning for the knapsack problem
Quantifying scrambling in quantum neural networks
An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations
SABRINA: a stochastic subspace majorization-minimization algorithm
Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization
An adaptive Polyak heavy-ball method
Tackling algorithmic bias in neural-network classifiers using Wasserstein-2 regularization
Variational learning the SDC quantum protocol with gradient-based optimization
Variational inference with vine copulas: an efficient approach for Bayesian computer model calibration
On stochastic accelerated gradient with convergence rate
An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems
Bridging the gap: machine learning to resolve improperly modeled dynamics
A hybrid stochastic optimization framework for composite nonconvex optimization
Preconditioning meets biased compression for efficient distributed optimization
A control theoretic framework for adaptive gradient optimizers
Deep learning approach to Hubble parameter
Smooth monotone stochastic variational inequalities and saddle point problems: a survey
Time-adaptive Lagrangian variational integrators for accelerated optimization
Stochastic perturbation of subgradient algorithm for nonconvex deep neural networks
Accelerated doubly stochastic gradient descent for tensor CP decomposition
Categorical foundations of gradient-based learning
Convergence of gradient algorithms for nonconvex \(C^{1+ \alpha}\) cost functions
Convergence in quadratic mean of averaged stochastic gradient algorithms without strong convexity nor bounded gradient
Optimization of the closed-loop controller of a discontinuous capsule drive using a neural network
First-order methods for convex optimization
Online renewable smooth quantile regression
The limited-memory recursive variational Gaussian approximation (L-RVGA)
Optimization Design of Laminated Functionally Carbon Nanotube-Reinforced Composite Plates Using Deep Neural Networks and Differential Evolution
A variable metric and Nesterov extrapolated proximal DCA with backtracking for a composite DC program
Bayesian Stochastic Gradient Descent for Stochastic Optimization with Streaming Input Data
Riemannian Natural Gradient Methods
Stochastic variational inference for GARCH models
Dynamic regret of adaptive gradient methods for strongly convex problems
Estimation and inference by stochastic optimization
The buffered optimization methods for online transfer function identification employed on DEAP actuator
Momentum-innovation recursive least squares identification algorithm for a servo turntable system based on the output error model
GANs training: A game and stochastic control approach
Adaptive proximal SGD based on new estimating sequences for sparser ERM
Stochastic gradient descent: where optimization meets machine learning
Open issues and recent advances in DC programming and DCA
Physical informed neural networks with soft and hard boundary constraints for solving advection-diffusion equations using Fourier expansions
SGEM: stochastic gradient with energy and momentum
Deep learning approximations for non-local nonlinear PDEs with Neumann boundary conditions
Recent Theoretical Advances in Non-Convex Optimization