Finite sample performance of linear least squares estimation
DOI: 10.1016/J.JFRANKLIN.2021.07.048
zbMATH Open: 1472.93179
arXiv: 1810.06380
OpenAlex: W3191905761
MaRDI QID: Q2235406
FDO: Q2235406
Authors: Michael Krikheli, Amir Leshem
Publication date: 21 October 2021
Published in: Journal of the Franklin Institute
Abstract: Linear least squares is a well-known parameter estimation technique that is used even when it is sub-optimal, because of its very low computational requirements and because exact knowledge of the noise statistics is not required. Surprisingly, bounding the probability of large errors with finitely many samples has remained open, especially when dealing with correlated noise with unknown covariance. In this paper we analyze the finite sample performance of the linear least squares estimator and obtain accurate bounds on the tail of the estimator's distribution. We show fast exponential convergence of the number of samples required to ensure a given accuracy with high probability. We analyze a sub-Gaussian setting with a fixed or random design matrix of the linear least squares problem, and we extend the results to the case of a martingale difference noise sequence. Our analysis method is simple, using elementary tail bounds on the estimation error, and we also provide probabilistic finite sample bounds on the norm of the estimation error. The tightness of the bounds is tested through simulation, and we demonstrate that our results are tighter than previously proposed bounds on the norm of the error. The proposed bounds make it possible to predict the number of samples required for least squares estimation even when least squares is sub-optimal and is used for computational simplicity.
Full work available at URL: https://arxiv.org/abs/1810.06380
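The setting the abstract describes can be illustrated with a minimal simulation sketch (not the paper's method; the dimensions, noise model, and accuracy threshold below are illustrative assumptions): it forms the linear least squares estimate from finitely many samples and empirically estimates the tail probability that the error norm exceeds a given accuracy.

```python
import numpy as np

# Minimal sketch: empirically estimate the tail probability
# P(||theta_hat - theta|| > eps) of the linear least squares estimator
# under bounded (hence sub-Gaussian) noise with a random design matrix.
# All parameter choices (d, n, eps, trials) are illustrative assumptions.

rng = np.random.default_rng(0)
d, n = 3, 200                       # parameter dimension, number of samples
theta = np.array([1.0, -2.0, 0.5])  # true parameter vector (assumed)

def lls_tail_prob(trials=2000, eps=0.2):
    """Fraction of trials in which the LLS error norm exceeds eps."""
    exceed = 0
    for _ in range(trials):
        X = rng.normal(size=(n, d))         # random Gaussian design matrix
        noise = rng.uniform(-1, 1, size=n)  # bounded => sub-Gaussian noise
        y = X @ theta + noise
        theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        if np.linalg.norm(theta_hat - theta) > eps:
            exceed += 1
    return exceed / trials

print(lls_tail_prob())
```

Repeating this for a grid of sample sizes n gives an empirical view of how the tail probability decays with n, which is the quantity the paper's finite sample bounds control.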
Recommendations
- Error bounds for computed least squares estimators
- Finite impulse response models: a non-asymptotic analysis of the least squares estimator
- Loss minimization and parameter estimation with heavy tails
- A new perspective on least squares under convex constraint
- Finite sample properties of linear model identification
MSC classes:
- Estimation and detection in stochastic control theory (93E10)
- Least squares and related methods for stochastic control systems (93E24)
Cites Work
- Simultaneous analysis of Lasso and Dantzig selector
- Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Confidence sets in sparse regression
- High-dimensional statistics. A non-asymptotic viewpoint
- Asymptotic properties of general autoregressive models and strong consistency of least-squares estimates of their parameters
- Robust linear least squares regression
- A tail inequality for quadratic forms of subgaussian random vectors
- User-friendly tail bounds for sums of random matrices
- Optimal rates for the regularized least-squares algorithm
- Title not available
- Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems
- Asymptotic theory of nonlinear least squares estimation
- Recursive consistent estimation with bounded noise
- Concentration of Measure for the Analysis of Randomized Algorithms
- Random design analysis of ridge regression
- Direction-of-arrival estimation for constant modulus signals.
- One-bit compressed sensing with non-Gaussian measurements
- The lower tail of random quadratic forms with applications to ordinary least squares
- A large deviation result for parameter estimators and its application to nonlinear regression analysis
- Strong consistency of least squares estimators in linear regression models
- The rate of convergence of the least squares estimator in a non-linear regression model with dependent errors
- Sequential parameter estimation of time-varying non-Gaussian autoregressive processes
- A note on strong consistency of least squares estimators in regression models with martingale difference errors
- An improvement of convergence rate estimates in the Lyapunov theorem
- On the exponential rate of convergence of the least squares estimator in the nonlinear regression model with Gaussian errors
- Linear Least Squares Approach for Accurate Received Signal Strength Based Source Localization
- Finite Sample Efficiency of Ordinary Least Squares in the Linear Regression Model with Autocorrelated Errors
- Robust Soft-Decision Interpolation Using Weighted Least Squares
- A large deviation result for the least squares estimators in nonlinear regression
- On the Asymptotic Theory of Fixed-Size Sequential Confidence Bounds for Linear Regression Parameters
- Wiener Filters in Gaussian Mixture Signal Estimation With $\ell_\infty$-Norm Error
- Inequalities for quantiles of the chi-square distribution
- Non-parametric likelihood based channel estimator for Gaussian mixture noise
- On non-asymptotic bounds for estimation in generalized linear models with highly correlated design
- The sample complexity of learning linear predictors with the squared loss
- A concentration bound for stochastic approximation via Alekseev's formula
- What is a Martingale?
- A General Class of Outage Error Probability Lower Bounds in Bayesian Parameter Estimation
Cited In (5)
- Finite sample stability properties of the least median of squares estimator
- Performance Analysis of Linear-Equality-Constrained Least-Squares Estimation
- Data-driven fault detection for Lipschitz nonlinear systems: from open to closed-loop systems
- Finite impulse response models: a non-asymptotic analysis of the least squares estimator
- Analyzing the number of samples required for an approximate Monte-Carlo LMS line estimator