Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
Publication:2967608
DOI: 10.1137/15M1021106 · zbMath: 1456.90125 · arXiv: 1505.02250 · OpenAlex: W2963060476 · MaRDI QID: Q2967608
Martin J. Wainwright, Mert Pilanci
Publication date: 1 March 2017
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1505.02250
Keywords: convex optimization; Newton's method; random matrices; interior point method; randomized algorithms; random projection; large-scale problems; self-concordant functions
MSC classification: Generalized linear models (logistic models) (62J12); Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Interior-point methods (90C51)
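The paper's core idea — replacing the exact Hessian in Newton's method with a randomly sketched approximation — can be illustrated with a minimal sketch for the least-squares objective f(x) = ½‖Ax − b‖², whose Hessian square root is A itself. This is an illustrative simplification, not the paper's full method: it assumes a plain Gaussian sketching matrix (the paper also analyzes randomized orthonormal systems and sparse sketches) and omits the line search; the function name `newton_sketch_ls` is hypothetical.

```python
import numpy as np

def newton_sketch_ls(A, b, m, iters=20, seed=0):
    """Illustrative Newton-sketch iteration for f(x) = 0.5*||Ax - b||^2.

    Each step sketches the Hessian square root A with an m x n Gaussian
    matrix S, then solves the small d x d sketched Newton system
        (SA)^T (SA) dx = -grad f(x),
    using the exact gradient. Requires m >= d so the sketched Hessian
    is invertible (w.h.p.).
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sketch
        SA = S @ A                                     # m x d sketched sqrt-Hessian
        grad = A.T @ (A @ x - b)                       # exact gradient
        dx = np.linalg.solve(SA.T @ SA, -grad)         # sketched Newton step
        x = x + dx
    return x
```

The per-iteration cost is dominated by forming SA and the gradient (O(mnd + nd)) plus a d × d solve, instead of the O(nd²) needed to form the exact Hessian AᵀA; with m chosen proportional to d (up to log factors), the sketched step still contracts the error toward the least-squares solution at each iteration.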
Related Items
- Side-constrained minimum sum-of-squares clustering: mathematical programming and random projections
- Functional principal subspace sampling for large scale functional data analysis
- Randomized Spectral Clustering in Large-Scale Stochastic Block Models
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Randomized Newton's method for solving differential equations based on the neural network discretization
- Sketch-based empirical natural gradient methods for deep learning
- Sketched Newton--Raphson
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- Convergence analysis of a subsampled Levenberg-Marquardt algorithm
- M-IHS: an accelerated randomized preconditioning method avoiding costly matrix decompositions
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- On maximum residual nonlinear Kaczmarz-type algorithms for large nonlinear systems of equations
- Generalized linear models for massive data via doubly-sketching
- Randomized Quasi-Newton Updates Are Linearly Convergent Matrix Inversion Algorithms
- Random projections for quadratic programs
- Global optimization using random embeddings
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory
- Riemannian Natural Gradient Methods
- Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
- An investigation of Newton-Sketch and subsampled Newton methods
- Convergence of Newton-MR under Inexact Hessian Information
- Reduced rank regression with matrix projections for high-dimensional multivariate linear regression model
- Sub-sampled Newton methods
- Optimization Methods for Large-Scale Machine Learning
- A non-Euclidean gradient descent method with sketching for unconstrained matrix minimization
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Generalized self-concordant functions: a recipe for Newton-type methods
- A hybrid stochastic optimization framework for composite nonconvex optimization
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- SGD-QN
- Faster least squares approximation
- Fast dimension reduction using Rademacher series on dual BCH codes
- Database-friendly random projections: Johnson-Lindenstrauss with binary coins
- Introductory lectures on convex optimization. A basic course
- Least angle regression (with discussion)
- Local Rademacher complexities
- Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares
- A sparse Johnson-Lindenstrauss transform
- Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform
- Randomized Sketches of Convex Programs With Sharp Guarantees
- Truncated-Newton algorithms for large-scale unconstrained optimization
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- Sparser Johnson-Lindenstrauss Transforms
- Inexact Newton Methods
- Numerical Optimization
- Graph Sparsification by Effective Resistances