scientific article; zbMATH DE number 7049761
From MaRDI portal
Publication: 4633055
zbMATH: 1487.90529; MaRDI QID: Q4633055
No author found.
Publication date: 2 May 2019
Full work available at URL: http://jmlr.csail.mit.edu/papers/v20/17-594.html
Title: unavailable (zbMATH Open Web Interface contents unavailable due to conflicting licenses)
Mathematics Subject Classification (MSC):
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Probabilistic models, generic numerical methods in probability and statistics (65C20)
- Convex programming (90C25)
Related Items
- A Trust-region Method for Nonsmooth Nonconvex Optimization
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
- An overview of stochastic quasi-Newton methods for large-scale machine learning
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- On data preconditioning for regularized loss minimization
- First-order methods of smooth convex optimization with inexact oracle
- Random design analysis of ridge regression
- An optimal method for stochastic composite optimization
- A sparsity preserving stochastic gradient methods for sparse regression
- User-friendly tail bounds for sums of random matrices
- On the limited memory BFGS method for large scale optimization
- Introductory lectures on convex optimization. A basic course.
- Sub-sampled Newton methods
- Oracle complexity of second-order methods for smooth convex optimization
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Accelerated and Inexact Forward-Backward Algorithms
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- Randomized Algorithms for Matrices and Data
- An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
- Numerical Optimization
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- Optimization Methods for Large-Scale Machine Learning
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Optimal Distributed Online Prediction using Mini-Batches
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Exact and inexact subsampled Newton methods for optimization
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization