scientific article; zbMATH DE number 6860821
From MaRDI portal
Publication: 4637040
zbMath: 1441.90115 · arXiv: 1602.03943 · MaRDI QID: Q4637040
Elad Hazan, Brian Bullins, Naman Agarwal
Publication date: 17 April 2018
Full work available at URL: https://arxiv.org/abs/1602.03943
Title: Second-Order Stochastic Optimization for Machine Learning in Linear Time
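For context, arXiv:1602.03943 develops a linear-time stochastic second-order method (LiSSA): Newton-type steps whose inverse-Hessian-vector products are estimated by a truncated Neumann series evaluated with single-example Hessian-vector products. The sketch below only illustrates that idea; the ridge-regularized least-squares objective, the sample sizes s1 and s2, the data scaling, and all names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical problem: ridge-regularized least squares,
    # f(w) = (1/2n)||Xw - y||^2 + (lam/2)||w||^2.
    n, d, lam = 1000, 20, 0.1
    X = rng.standard_normal((n, d)) / np.sqrt(2 * d)  # scaled so ||x_i||^2 + lam stays near or below 1
    y = rng.standard_normal(n)

    def grad(w):
        return X.T @ (X @ w - y) / n + lam * w

    def hvp(i, v):
        # Hessian-vector product of a single sampled example:
        # (x_i x_i^T + lam I) v, computable in O(d) time.
        return X[i] * (X[i] @ v) + lam * v

    def lissa_direction(w, s1=10, s2=200):
        # Estimate H^{-1} grad via the truncated Neumann recursion
        # u_j = g + (I - H_{i_j}) u_{j-1}, averaged over s1 repetitions.
        # The analysis assumes each per-sample Hessian satisfies ||H_i|| <= 1.
        g = grad(w)
        estimates = []
        for _ in range(s1):
            u = g.copy()
            for _ in range(s2):
                i = rng.integers(n)
                u = g + u - hvp(i, u)
            estimates.append(u)
        return np.mean(estimates, axis=0)

    w = np.zeros(d)
    for _ in range(20):
        w -= lissa_direction(w)   # approximate Newton step
    print("final gradient norm:", np.linalg.norm(grad(w)))

Each inner update touches one sampled example, so the per-step cost is O(d) rather than the O(d^2) or worse of forming the Hessian, which is the sense in which the method runs in linear time.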
Related Items (19)
- Stronger data poisoning attacks break data sanitization defenses
- Regularization via Mass Transportation
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Sketched Newton--Raphson
- Automatic, dynamic, and nearly optimal learning rate specification via local quadratic approximation
- Revisiting the fragility of influence functions
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- On pseudoinverse-free block maximum residual nonlinear Kaczmarz method for solving large-scale nonlinear system of equations
- Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
- An investigation of Newton-Sketch and subsampled Newton methods
- Sub-sampled Newton methods
- Optimization Methods for Large-Scale Machine Learning
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Stochastic sub-sampled Newton method with variance reduction
- Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- A subsampling approach for Bayesian model selection
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- User-friendly tail bounds for sums of random matrices
- Introductory lectures on convex optimization. A basic course.
- Oracle complexity of second-order methods for smooth convex optimization
- Randomized Sketches of Convex Programs With Sharp Guarantees
- Uniform Sampling for Matrix Approximation
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
- Nearly Tight Oblivious Subspace Embeddings by Trace Inequalities
- RES: Regularized Stochastic BFGS Algorithm
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Finding approximate local minima faster than gradient descent
- Katyusha: the first direct acceleration of stochastic gradient methods
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- A Family of Variable-Metric Methods Derived by Variational Means
- The Convergence of a Class of Double-rank Minimization Algorithms
- A new approach to variable metric algorithms
- Conditioning of Quasi-Newton Methods for Function Minimization
- A Stochastic Approximation Method
- Exact and inexact subsampled Newton methods for optimization
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
This page was built for publication: Second-Order Stochastic Optimization for Machine Learning in Linear Time