On data preconditioning for regularized loss minimization
From MaRDI portal
Abstract: In this work, we study data preconditioning, a well-known and long-standing technique, for boosting the convergence of first-order methods for regularized loss minimization. It is well understood that the condition number of the problem, i.e., the ratio of the Lipschitz constant to the strong convexity modulus, strongly affects the convergence of first-order optimization methods. Therefore, using a small regularization parameter to achieve good generalization performance yields an ill-conditioned problem and becomes the bottleneck for big data problems. We provide a theory of data preconditioning for regularized loss minimization. In particular, our analysis exhibits an appropriate data preconditioner and characterizes the conditions on the loss function and on the data under which data preconditioning can reduce the condition number and thereby boost convergence when minimizing the regularized loss. To make data preconditioning practically useful, we employ and analyze a random sampling approach for computing the preconditioned data efficiently. Preliminary experiments validate our theory.
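The idea in the abstract can be illustrated with a small numerical sketch. The whitening preconditioner \(P = (X^\top X/n)^{-1/2}\) and the subsample size `m` used below are illustrative choices of our own, not the paper's exact construction (which also characterizes when such a transformation is admissible for a given loss and regularizer); the sketch only shows how preconditioning the data, exactly or from a random subsample, shrinks the condition number of the regularized Hessian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ill-conditioned data: n samples, d correlated features.
n, d = 2000, 50
A = rng.standard_normal((d, d))
true_cov = A @ A.T / d + 0.01 * np.eye(d)
X = rng.multivariate_normal(np.zeros(d), true_cov, size=n)

lam = 1e-3  # small regularization parameter -> ill-conditioned problem

def cond_number(H):
    """Ratio of largest to smallest eigenvalue of a symmetric PD matrix."""
    w = np.linalg.eigvalsh(H)
    return w[-1] / w[0]

def inv_sqrt(M):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# Hessian of the l2-regularized least-squares loss: X^T X / n + lam * I.
Sigma = X.T @ X / n
cond_raw = cond_number(Sigma + lam * np.eye(d))

# Exact data preconditioning: whiten the features with P = Sigma^{-1/2}.
P = inv_sqrt(Sigma)
X_pre = X @ P
cond_exact = cond_number(X_pre.T @ X_pre / n + lam * np.eye(d))

# Random-sampling approximation: estimate the preconditioner from m << n
# rows, as in the paper's efficiency argument (sizes here are illustrative).
m = 200
idx = rng.choice(n, size=m, replace=False)
P_m = inv_sqrt(X[idx].T @ X[idx] / m + 1e-6 * np.eye(d))
X_sub = X @ P_m
cond_sub = cond_number(X_sub.T @ X_sub / n + lam * np.eye(d))

print(f"condition number, raw data:               {cond_raw:.1f}")
print(f"after exact preconditioning:              {cond_exact:.3f}")
print(f"after subsampled (m={m}) preconditioning: {cond_sub:.1f}")
```

With exact whitening the data term of the Hessian becomes the identity, so the condition number collapses to 1; the subsampled preconditioner only approximates this, but already removes most of the ill-conditioning at a fraction of the cost.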
Recommendations
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization
- Dual space preconditioning for gradient descent
- Faster kernel ridge regression using sketching and preconditioning
- Weighted SGD for \(\ell_p\) regression with randomized preconditioning
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
Cites work
- scientific article; zbMATH DE number 3673370
- scientific article; zbMATH DE number 2107836
- A proximal stochastic gradient method with progressive variance reduction
- Adaptive subgradient methods for online learning and stochastic optimization
- Erratum to: "Minimizing finite sums with the stochastic average gradient"
- Improved analysis of the subsampled randomized Hadamard transform
- Introductory lectures on convex optimization. A basic course.
- Iterative Solution Methods
- On the use of stochastic Hessian information in optimization methods for machine learning
- Pegasos: primal estimated sub-gradient solver for SVM
- Revisiting the Nyström method for improved large-scale machine learning
- Sparsity and incoherence in compressive sampling
- Stochastic dual coordinate ascent methods for regularized loss minimization
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
- The elements of statistical learning. Data mining, inference, and prediction
- Weighted SGD for \(\ell_p\) regression with randomized preconditioning
- Preconditioning for feature selection and regression in high-dimensional problems
Cited in (4)
- Preconditioning meets biased compression for efficient distributed optimization
- Sufficient dimension reduction for a novel class of zero-inflated graphical models
- Utilizing second order information in minibatch stochastic variance reduced proximal iterations
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization