scientific article

From MaRDI portal

zbMath: 1067.46008
MaRDI QID: Q2760174

Kenneth R. Davidson, Stanislaw J. Szarek

Publication date: 2001


Title: not available (zbMATH Open Web Interface contents unavailable due to conflicting licenses)



Related Items

On the Expectation of Operator Norms of Random Matrices
Frames and the Feichtinger conjecture
Randomized numerical linear algebra: Foundations and algorithms
Random sections of ellipsoids and the power of random information
Distance to normal elements in 𝐶*-algebras of real rank zero
Dimension-free bounds for largest singular values of matrix Gaussian series
Malnormal matrices
Improved Bounds for Small-Sample Estimation
Simpler is better: a comparative study of randomized pivoting algorithms for CUR and interpolative decompositions
The Complexity of Diagonalization
Matrix concentration inequalities and free probability
Universality for the Conjugate Gradient and MINRES Algorithms on Sample Covariance Matrices
Randomized Low-Rank Approximation for Symmetric Indefinite Matrices
Debiasing convex regularized estimators and interval estimation in linear models
Sparse constrained projection approximation subspace tracking
Norms of structured random matrices
Fast randomized numerical rank estimation for numerically low-rank matrices
A fast randomized algorithm for computing an approximate null space
Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
Tightness of a New and Enhanced Semidefinite Relaxation for MIMO Detection
Concentration for noncommutative polynomials in random matrices
The conjugate gradient algorithm on well-conditioned Wishart matrices is almost deterministic
Unnamed Item
Low-Rank Approximation of a Matrix: Novel Insights, New Progress, and Extensions
Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles
Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
Instrumental variables estimation with many weak instruments using regularized JIVE
Ridge regression and asymptotic minimax estimation over spheres of growing dimension
Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation
Counterexamples to the maximal \(p\)-norm multiplicativity conjecture for all \(p>1\)
For most large underdetermined systems of linear equations the minimal 𝓁1‐norm solution is also the sparsest solution
Almost commuting self-adjoint matrices: The real and self-dual cases
Sparse recovery from extreme eigenvalues deviation inequalities
Spectral Methods for Passive Imaging: Nonasymptotic Performance and Robustness
For most large underdetermined systems of equations, the minimal 𝓁1‐norm near‐solution approximates the sparsest near‐solution
Self-Sustaining Iterated Learning
On the Limiting Shape of Young Diagrams Associated with Inhomogeneous Random Words
Rank $2r$ Iterative Least Squares: Efficient Recovery of Ill-Conditioned Low Rank Matrices from Few Entries
Nonadditivity of Rényi entropy and Dvoretzky’s theorem
Sampling convex bodies: a random matrix approach
Low-Rank Matrix Estimation from Rank-One Projections by Unlifted Convex Optimization
Estimating Leverage Scores via Rank Revealing Methods and Randomization
Discussion: Latent variable graphical model selection via convex optimization
Discussion: Latent variable graphical model selection via convex optimization
Discussion: Latent variable graphical model selection via convex optimization
Discussion: Latent variable graphical model selection via convex optimization
Discussion: Latent variable graphical model selection via convex optimization
Unnamed Item
Sublinear Cost Low Rank Approximation via Subspace Sampling
Orthogonal Trace-Sum Maximization: Tightness of the Semidefinite Relaxation and Guarantee of Locally Optimal Solutions
Graph-Based Regularization for Regression Problems with Alignment and Highly Correlated Designs
Twice Is Enough for Dangerous Eigenvalues
Numerically safe Gaussian elimination with no pivoting
Average-case complexity without the black swans
Concentration of norms and eigenvalues of random matrices
Diameters of sections and coverings of convex bodies
Marčenko-Pastur law for Tyler's M-estimator
High-dimensional analysis of semidefinite relaxations for sparse principal components
Reduced-rank estimation for ill-conditioned stochastic linear model with high signal-to-noise ratio
Bernstein-von Mises theorems for functionals of the covariance matrix
Sharp nonasymptotic bounds on the norm of random matrices with independent entries
Approximation by matrices with simple spectra
Phase retrieval with one or two diffraction patterns by alternating projections with the null initialization
New studies of randomized augmentation and additive preprocessing
On the effect of noisy measurements of the regressor in functional linear models
Minimax bounds for sparse PCA with noisy high-dimensional data
PROMP: a sparse recovery approach to lattice-valued signals
Minimax rate of testing in sparse linear regression
Phase retrieval using alternating minimization in a batch setting
Robust sparse covariance estimation by thresholding Tyler's M-estimator
Simple bounds for recovering low-complexity models
Hypothesis testing for regional quantiles
On singular values of matrices with independent rows
Entanglement, quantum randomness, and complexity beyond scrambling
The convex geometry of linear inverse problems
Hyperreflexivity and operator ideals
Application of second generation wavelets to blind spherical deconvolution
Correlated variables in regression: clustering and sparse estimation
High-dimensionality effects in the Markowitz problem and other quadratic programs with linear constraints: risk underestimation
Compressed sensing and matrix completion with constant proportion of corruptions
Randomized preprocessing versus pivoting
Discussion: Latent variable graphical model selection via convex optimization
Rejoinder: Latent variable graphical model selection via convex optimization
Adaptive covariance matrix estimation through block thresholding
Concentration of empirical distribution functions with applications to non-i.i.d. models
Statistical and computational limits for sparse matrix detection
Support union recovery in high-dimensional multivariate regression
Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
Minimax risks for sparse regressions: ultra-high dimensional phenomenons
Group symmetry and covariance regularization
Estimating networks with jumps
Blockwise SVD with error in the operator and application to blind deconvolution
Estimation of Gaussian graphs by model selection
Theoretical properties of Cook's PFC dimension reduction algorithm for linear regression
Detection boundary in sparse regression
Low rank multivariate regression
Numerically erasure-robust frames
Solving linear systems of equations with randomization, augmentation and aggregation
On concentration of empirical measures and convergence to the semi-circle law
An analytic study of the reversal of Hartmann flows by rotating magnetic fields
A numerical algorithm for zero counting. III: Randomization and condition
On the Banach-Mazur distance to cross-polytope
TAP free energy, spin glasses and variational inference
Canonical correlation coefficients of high-dimensional Gaussian vectors: finite rank case
The road to deterministic matrices with the restricted isometry property
On the relation between an operator and its self-commutator
Numerical range for random matrices
The sparsity and bias of the LASSO selection in high-dimensional linear regression
Learning semidefinite regularizers
An efficient numerical method for condition number constrained covariance matrix approximation
Interpreting latent variables in factor models via convex optimization
Detecting Markov random fields hidden in white noise
Random Laplacian matrices and convex relaxations
Invertibility of sparse non-Hermitian matrices
Nonlinear estimation for linear inverse problems with error in the operator
Hyperreflexivity of the space of module homomorphisms between non-commutative \(L^p\)-spaces
Saving phase: injectivity and stability for phase retrieval
User-friendly tail bounds for sums of random matrices
The Littlewood-Offord problem and invertibility of random matrices
Operator spaces with prescribed sets of completely bounded maps
Smallest singular value of random matrices and geometry of random polytopes
Randomized preprocessing of homogeneous linear systems of equations
A global homogeneity test for high-dimensional linear regression
High-dimensional Ising model selection using \(\ell_1\)-regularized logistic regression
Saturating constructions for normed spaces. II
High-dimensional Gaussian model selection on a Gaussian design
Nearly unbiased variable selection under minimax concave penalty
An upper bound on the smallest singular value of a square random matrix
Quantum expanders and geometry of operator spaces
No-gaps delocalization for general random matrices
Latent variable graphical model selection via convex optimization
Finite sample approximation results for principal component analysis: A matrix perturbation approach
Lasso-type recovery of sparse representations for high-dimensional data
Distributed noise-shaping quantization. I: Beta duals of finite frames and near-optimal quantization of random measurements
Robust covariance and scatter matrix estimation under Huber's contamination model
Sampled forms of functional PCA in reproducing kernel Hilbert spaces
Erasure recovery matrices for encoder protection
Consistency of restricted maximum likelihood estimators of principal components
The spectral norm of random lifts of matrices
On the strong restricted isometry property of Bernoulli random matrices
High dimensional random sections of isotropic convex bodies
Rigorous restricted isometry property of low-dimensional subspaces
On the robustness of minimum norm interpolators and regularized empirical risk minimizers
Preconditioning for orthogonal matching pursuit with noisy and random measurements: the Gaussian case
On the concentration of eigenvalues of random symmetric matrices
Optimal estimation and rank detection for sparse spiked covariance matrices
Preconditioning the Lasso for sign consistency
Asymptotic normality of robust \(M\)-estimators with convex penalty
Sparse learning via Boolean relaxations
Random multipliers numerically stabilize Gaussian and block Gaussian elimination: proofs and an extension to low-rank approximation
Robust computation of linear models by convex relaxation
Lower estimates for the singular values of random matrices