Faster least squares approximation
Abstract: Least squares approximation is a technique to find an approximate solution to a system of linear equations that has no exact solution. In a typical setting, one lets \(n\) be the number of constraints and \(d\) be the number of variables, with \(n \gg d\). Then, existing exact methods find a solution vector in \(O(nd^2)\) time. We present two randomized algorithms that provide very accurate relative-error approximations to the optimal value and the solution vector of a least squares approximation problem more rapidly than existing exact algorithms. Both of our algorithms preprocess the data with the Randomized Hadamard Transform. One then uniformly randomly samples constraints and solves the smaller problem on those constraints, and the other performs a sparse random projection and solves the smaller problem on those projected coordinates. In both cases, solving the smaller problem provides relative-error approximations, and, if \(n\) is sufficiently larger than \(d\), the approximate solution can be computed in \(O(nd \ln d)\) time.
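To make the first of the two algorithms concrete, the following is a minimal Python sketch of the subsampled randomized Hadamard transform approach, assuming a dense problem \(\min_x \|Ax - b\|_2\) with \(n \gg d\). The function name `srht_least_squares`, the choice of sampling without replacement, and the parameters in the usage example are illustrative assumptions, not the paper's reference implementation. For clarity the Hadamard matrix is formed densely, which costs \(O(m^2 d)\); reaching the \(O(nd \ln d)\) running time requires applying a fast Walsh-Hadamard transform in \(O(m \log m)\) per column instead.

```python
import numpy as np
from scipy.linalg import hadamard

def srht_least_squares(A, b, r, rng=None):
    """Approximate argmin_x ||Ax - b||_2: flip row signs with a random
    diagonal D, mix rows with a normalized Hadamard matrix H, uniformly
    sample r of the mixed rows, and solve the small problem exactly."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    # Pad with zero rows to the next power of two so hadamard(m) exists.
    m = 1 << max(n - 1, 1).bit_length()
    A_pad = np.vstack([A, np.zeros((m - n, d))])
    b_pad = np.concatenate([b, np.zeros(m - n)])
    signs = rng.choice([-1.0, 1.0], size=m)   # diagonal of D
    H = hadamard(m) / np.sqrt(m)              # orthogonal mixing matrix
    HDA = H @ (signs[:, None] * A_pad)
    HDb = H @ (signs * b_pad)
    # After mixing, the row leverage scores are nearly uniform, so
    # uniformly sampling r rows gives a faithful sketch of the problem.
    idx = rng.choice(m, size=r, replace=False)
    x, *_ = np.linalg.lstsq(HDA[idx], HDb[idx], rcond=None)
    return x

# Usage on a synthetic overdetermined system (n = 1024 constraints,
# d = 10 variables, r = 200 sampled constraints):
rng = np.random.default_rng(0)
A = rng.standard_normal((1024, 10))
b = rng.standard_normal(1024)
x_approx = srht_least_squares(A, b, r=200, rng=rng)
```

Note that the sketching step only changes the problem handed to the exact solver, so any standard least squares routine can be applied to the sampled rows.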
Recommendations
- A fast randomized algorithm for overdetermined linear least-squares regression
- Fast least squares approximation using tensor products of functions and linear forms
- A fast randomized algorithm for orthogonal projection
- A fast and efficient algorithm for low-rank approximation of a matrix
- Random projections for the nonnegative least-squares problem
Cites work
- scientific article; zbMATH DE number 5764801
- scientific article; zbMATH DE number 4072104
- scientific article; zbMATH DE number 47363
- scientific article; zbMATH DE number 819814
- scientific article; zbMATH DE number 961607
- scientific article; zbMATH DE number 967931
- A Fast Random Sampling Algorithm for Sparsifying Matrices
- A fast and efficient algorithm for low-rank approximation of a matrix
- A fast randomized algorithm for overdetermined linear least-squares regression
- A generalization of the fast LUP matrix decomposition algorithm and applications
- Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform
- Blendenpik: Supercharging LAPACK's Least-Squares Solver
- Comparison of the Method of Averages with the Method of Least Squares
- Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication
- Faster least squares approximation
- Generalized inverses. Theory and applications.
- Matrix multiplication via arithmetic progressions
- Numerical linear algebra in the streaming model
- On variants of the Johnson–Lindenstrauss lemma
- Relative-Error $CUR$ Matrix Decompositions
- Sampling algorithms for \(l_2\) regression and applications
- Sampling from large matrices
- Spectral techniques applied to sparse random graphs
- Sums of random Hermitian matrices and an inequality by Rudelson
Cited in (first 100 items shown)
- Adaptive iterative Hessian sketch via A-optimal subsampling
- An Efficient Algorithm for the Classical Least Squares Approximation
- Sketched approximation of regularized canonical correlation analysis
- Optimal subsampling for modal regression in massive data
- Structural conditions for projection-cost preservation via randomized matrix multiplication
- Faster least squares approximation
- Faster kernel ridge regression using sketching and preconditioning
- Simple backward error bounds for linear least-squares problems
- LSRN: A parallel iterative solver for strongly over- or underdetermined systems
- Optimal subsampling algorithms for big data regressions
- A count sketch maximal weighted residual Kaczmarz method for solving highly overdetermined linear systems
- Proximal algorithms and temporal difference methods for solving fixed point problems
- Optimal subsampling for composite quantile regression in big data
- Random projections for the nonnegative least-squares problem
- Random reordering in SOR-type methods
- Generalized linear models for massive data via doubly-sketching
- Sublinear update time randomized algorithms for dynamic graph regression
- A block-randomized stochastic method with importance sampling for CP tensor decomposition
- Fast dimension reduction using Rademacher series on dual BCH codes
- Pass-efficient randomized algorithms for low-rank matrix approximation using any number of views
- A random sampling algorithm for fully-connected tensor network decomposition with applications
- Perturbations of the \textsc{Tcur} decomposition for tensor valued data in the Tucker format
- Estimating Leverage Scores via Rank Revealing Methods and Randomization
- Optimal subsampling for softmax regression
- Distributed penalized modal regression for massive data
- Fast and accurate randomized algorithms for linear systems and eigenvalue problems
- Subdata selection algorithm for linear model discrimination
- Dimensionality reduction with subgaussian matrices: a unified theory
- Far-field compression for fast kernel summation methods in high dimensions
- Deterministic subsampling for logistic regression with massive data
- Stochastic boundary methods of fundamental solutions for solving PDEs
- Leveraging for big data regression
- Structural Convergence Results for Approximation of Dominant Subspaces from Block Krylov Spaces
- Admissible subspaces and the subspace iteration method
- Sampling algorithms for \(l_2\) regression and applications
- A bootstrap method for error estimation in randomized matrix multiplication
- An investigation of Newton-sketch and subsampled Newton methods
- Unbiased predictive risk estimation of the Tikhonov regularization parameter: convergence with increasing rank approximations of the singular value decomposition
- Robust optimal subsampling based on weighted asymmetric least squares
- Robust and efficient subsampling algorithms for massive data logistic regression
- On \(b\)-bit min-wise hashing for large-scale regression and classification with sparse data
- Sketched ridge regression: optimization perspective, statistical perspective, and model averaging
- Weighted SGD for \(\ell_p\) regression with randomized preconditioning
- Sharper bounds for regularized data fitting
- Scalable kernel \(k\)-means clustering with Nyström approximation: relative-error bounds
- Optimal subsampling for large‐sample quantile regression with massive data
- A fast randomized algorithm for orthogonal projection
- Fast and efficient linear programming and linear least-squares computations
- Optimal subsampling for large sample logistic regression
- More efficient estimation for logistic regression with optimal subsamples
- On the perturbation of an \(L^2\)-orthogonal projection
- scientific article; zbMATH DE number 7626767
- Functional principal subspace sampling for large scale functional data analysis
- Optimal Poisson subsampling for softmax regression
- Sampled Tikhonov regularization for large linear inverse problems
- Fast iterative methods for least squares estimations
- Guarantees for the Kronecker fast Johnson-Lindenstrauss transform using a coherence and sampling argument
- Zeroth-order optimization with orthogonal random directions
- Random design analysis of ridge regression
- Principal eigenvector localization and centrality in networks: revisited
- \texttt{pylspack}: parallel algorithms and data structures for sketching, column subset selection, regression, and leverage scores
- A two-stage optimal subsampling estimation for missing data problems with large-scale data
- Distributed subsampling for multiplicative regression
- Multiplicative perturbation bounds for multivariate multiple linear regression in Schatten \(p\)-norms
- Frequent directions: simple and deterministic matrix sketching
- General error estimates for the Longstaff-Schwartz least-squares Monte Carlo algorithm
- Optimal subsampling algorithms for composite quantile regression in massive data
- Sketching for principal component regression
- Sampling Kaczmarz-Motzkin method for linear feasibility problems: generalization and acceleration
- On the inversion-free Newton's method and its applications
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- Accelerated sampling Kaczmarz Motzkin algorithm for the linear feasibility problem
- Parameter estimation of linear mixed effects model based on online update
- Randomized approximation of the Gram matrix: exact computation and probabilistic bounds
- Inversion-free subsampling Newton's method for large sample logistic regression
- Surface temperature monitoring in liver procurement via functional variance change-point analysis
- A literature survey of matrix methods for data science
- Fast and forward stable randomized algorithms for linear least-squares problems
- Practical leverage-based sampling for low-rank tensor decomposition
- A Practical Randomized CP Tensor Decomposition
- Optimal Distributed Subsampling for Maximum Quasi-Likelihood Estimators With Massive Data
- A selective review on statistical methods for massive data computation: distributed computing, subsampling, and minibatch techniques
- Preconditioners for Krylov subspace methods: An overview
- Optimal subsampling for least absolute relative error estimators with massive data
- Redundancy techniques for straggler mitigation in distributed optimization and learning
- A Subsampling Method for Regression Problems Based on Minimum Energy Criterion
- Density Regression with Conditional Support Points
- Structured random sketching for PDE inverse problems
- Subsampling in longitudinal models
- On the convergence of simulation-based iterative methods for solving singular linear systems
- A fast randomized algorithm for overdetermined linear least-squares regression
- M-IHS: an accelerated randomized preconditioning method avoiding costly matrix decompositions
- On computationally tractable selection of experiments in measurement-constrained regression models
- Information-Based Optimal Subdata Selection for Big Data Linear Regression
- On the perturbation of the Moore-Penrose inverse of a matrix
- Two-subspace projection method for coherent overdetermined systems
- Optimal Sampling for Generalized Linear Models Under Measurement Constraints
- Randomized Spectral Clustering in Large-Scale Stochastic Block Models
- Subsampling and Jackknifing: A Practically Convenient Solution for Large Data Analysis With Limited Computational Resources
- Unweighted estimation based on optimal sample under measurement constraints