Faster least squares approximation
From MaRDI portal
Publication: Q623334
DOI: 10.1007/s00211-010-0331-6
zbMath: 1218.65037
arXiv: 0710.1435
OpenAlex: W2007399394
MaRDI QID: Q623334
Publication date: 14 February 2011
Published in: Numerische Mathematik
Full work available at URL: https://arxiv.org/abs/0710.1435
Keywords: optimization; method of least squares; overdetermined system; QR decomposition; direct methods; accurate relative-error approximation; overconstrained least-squares problem; projection-based randomized algorithm; randomized Hadamard transform; sampling-based randomized algorithm
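The keywords above describe the paper's core idea: compress an overconstrained least-squares problem with a randomized Hadamard transform plus row sampling, then solve the much smaller sketched problem directly. A minimal illustrative sketch of this "sketch-and-solve" scheme (dimensions and sketch size are arbitrary choices for demonstration, not the paper's tuned constants):

```python
# Hedged sketch of randomized least squares via a subsampled randomized
# Hadamard transform (SRHT): sign-flip the rows, mix them with an orthonormal
# Walsh-Hadamard transform, uniformly sample a few mixed rows, and solve the
# small sketched problem by a direct method.
import numpy as np
from scipy.linalg import hadamard, lstsq

rng = np.random.default_rng(0)
n, d, r = 1024, 20, 200            # n must be a power of 2 for hadamard()
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# SRHT pieces: random signs D, orthonormal Hadamard H, uniform row sample
D = rng.choice([-1.0, 1.0], size=n)
H = hadamard(n) / np.sqrt(n)
rows = rng.choice(n, size=r, replace=False)
SA = np.sqrt(n / r) * (H @ (D[:, None] * A))[rows]
Sb = np.sqrt(n / r) * (H @ (D * b))[rows]

x_sketch, *_ = lstsq(SA, Sb)       # solve the small r x d problem
x_exact, *_ = lstsq(A, b)          # exact solution, for comparison
rel_err = np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b)
print(rel_err)                     # residual ratio, typically close to 1
```

Forming `H` densely is only for illustration; the point of the paper is that the Hadamard mixing can be applied in O(n log n) time without materializing the matrix, making the whole solve faster than a dense QR on the original system.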
Related Items (89)
- Optimal Distributed Subsampling for Maximum Quasi-Likelihood Estimators With Massive Data
- Functional principal subspace sampling for large scale functional data analysis
- Principal eigenvector localization and centrality in networks: revisited
- Fast Quantum Algorithms for Least Squares Regression and Statistic Leverage Scores
- A two-stage optimal subsampling estimation for missing data problems with large-scale data
- Randomized Spectral Clustering in Large-Scale Stochastic Block Models
- Randomized Iterative Methods for Linear Systems
- Sampling Kaczmarz-Motzkin method for linear feasibility problems: generalization and acceleration
- Inversion-free subsampling Newton's method for large sample logistic regression
- Optimal Sampling for Generalized Linear Models Under Measurement Constraints
- Unnamed Item
- Semi-Infinite Linear Regression and Its Applications
- Perturbations of the TCUR decomposition for tensor valued data in the Tucker format
- Far-field compression for fast kernel summation methods in high dimensions
- Sharper Bounds for Regularized Data Fitting
- Accelerated sampling Kaczmarz Motzkin algorithm for the linear feasibility problem
- Surface temperature monitoring in liver procurement via functional variance change-point analysis
- Simple backward error bounds for linear least-squares problems
- Guarantees for the Kronecker fast Johnson-Lindenstrauss transform using a coherence and sampling argument
- Practical Leverage-Based Sampling for Low-Rank Tensor Decomposition
- Faster Kernel Ridge Regression Using Sketching and Preconditioning
- Zeroth-order optimization with orthogonal random directions
- Structural conditions for projection-cost preservation via randomized matrix multiplication
- Optimal subsampling for large-sample quantile regression with massive data
- A literature survey of matrix methods for data science
- Random design analysis of ridge regression
- Distributed penalized modal regression for massive data
- M-IHS: an accelerated randomized preconditioning method avoiding costly matrix decompositions
- Subsampling and Jackknifing: A Practically Convenient Solution for Large Data Analysis With Limited Computational Resources
- Sketched approximation of regularized canonical correlation analysis
- Model constraints independent optimal subsampling probabilities for softmax regression
- Faster least squares approximation
- Optimal subsampling for softmax regression
- Generalized linear models for massive data via doubly-sketching
- Unnamed Item
- A block-randomized stochastic method with importance sampling for CP tensor decomposition
- Optimal subsampling algorithms for composite quantile regression in massive data
- Preconditioners for Krylov subspace methods: An overview
- Adaptive iterative Hessian sketch via A-optimal subsampling
- Admissible subspaces and the subspace iteration method
- Subsampling in longitudinal models
- Dense fast random projections and Lean Walsh transforms
- Unnamed Item
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- Structural Convergence Results for Approximation of Dominant Subspaces from Block Krylov Spaces
- Sketching for Principal Component Regression
- An investigation of Newton-Sketch and subsampled Newton methods
- Two-subspace projection method for coherent overdetermined systems
- Structured Random Sketching for PDE Inverse Problems
- A Practical Randomized CP Tensor Decomposition
- Fast quantum algorithms for least squares regression and statistic leverage scores
- Unnamed Item
- Unnamed Item
- Sublinear update time randomized algorithms for dynamic graph regression
- A count sketch maximal weighted residual Kaczmarz method for solving highly overdetermined linear systems
- Dimensionality reduction with subgaussian matrices: a unified theory
- On b-bit min-wise hashing for large-scale regression and classification with sparse data
- Unbiased predictive risk estimation of the Tikhonov regularization parameter: convergence with increasing rank approximations of the singular value decomposition
- Unnamed Item
- Unnamed Item
- Pass-Efficient Randomized Algorithms for Low-Rank Matrix Approximation Using Any Number of Views
- Proximal algorithms and temporal difference methods for solving fixed point problems
- Multiplicative perturbation bounds for multivariate multiple linear regression in Schatten p-norms
- Redundancy Techniques for Straggler Mitigation in Distributed Optimization and Learning
- On the perturbation of an L²-orthogonal projection
- Random projections for Bayesian regression
- Optimal Subsampling for Large Sample Logistic Regression
- On the perturbation of the Moore-Penrose inverse of a matrix
- Stochastic boundary methods of fundamental solutions for solving PDEs
- The Fast Cauchy Transform and Faster Robust Linear Regression
- Sampled Tikhonov regularization for large linear inverse problems
- General Error Estimates for the Longstaff-Schwartz Least-Squares Monte Carlo Algorithm
- Frequent Directions: Simple and Deterministic Matrix Sketching
- Compressed and Penalized Linear Regression
- Model-robust subdata selection for big data
- Information-Based Optimal Subdata Selection for Big Data Linear Regression
- Random projections for the nonnegative least-squares problem
- On Computationally Tractable Selection of Experiments in Measurement-Constrained Regression Models
- Fast dimension reduction using Rademacher series on dual BCH codes
- High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
- Randomized Approximation of the Gram Matrix: Exact Computation and Probabilistic Bounds
- Estimating Leverage Scores via Rank Revealing Methods and Randomization
- Optimal subsampling for composite quantile regression in big data
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Optimal subsampling for least absolute relative error estimators with massive data
- Unnamed Item
- Subdata selection algorithm for linear model discrimination
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Faster least squares approximation
- Sums of random Hermitian matrices and an inequality by Rudelson
- Matrix multiplication via arithmetic progressions
- Generalized inverses: theory and applications
- Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform
- A fast randomized algorithm for overdetermined linear least-squares regression
- Blendenpik: Supercharging LAPACK's Least-Squares Solver
- On variants of the Johnson–Lindenstrauss lemma
- Sampling from large matrices
- Sampling algorithms for l2 regression and applications
- A Fast Random Sampling Algorithm for Sparsifying Matrices
- Relative-Error $CUR$ Matrix Decompositions
- A generalization of the fast LUP matrix decomposition algorithm and applications
- Numerical linear algebra in the streaming model
- A fast and efficient algorithm for low-rank approximation of a matrix
- Spectral techniques applied to sparse random graphs
- Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication
- Comparison of the Method of Averages with the Method of Least Squares