High-Dimensional Probability
Publication: 4643248
DOI: 10.1017/9781108231596
zbMath: 1430.60005
OpenAlex: W4250954493
MaRDI QID: Q4643248
Publication date: 24 May 2018
Full work available at URL: https://doi.org/10.1017/9781108231596
semidefinite programming, clustering, matrix completion, dimension reduction, networks, empirical processes, VC dimension, coding, machine learning, concentration inequalities, data science, sparse regression, compressed sensing, high dimensions, covariance estimation
Introductory exposition (textbooks, tutorial papers, etc.) pertaining to statistics (62-01) ⋮ Introductory exposition (textbooks, tutorial papers, etc.) pertaining to probability theory (60-01)
Related Items (only showing first 100 items)
Optimal multiple change-point detection for high-dimensional data ⋮ Bounds in \(L^1\) Wasserstein distance on the normal approximation of general M-estimators ⋮ Just least squares: binary compressive sampling with low generative intrinsic dimension ⋮ Global optimization using random embeddings ⋮ Provable sample-efficient sparse phase retrieval initialized by truncated power method ⋮ On unifying randomized methods for inverse problems ⋮ Nearly optimal bounds for the global geometric landscape of phase retrieval ⋮ On random matrices arising in deep neural networks: General I.I.D. case ⋮ The geometry of near ground states in Gaussian polymer models ⋮ Neural network approximation and estimation of classifiers with classification boundary in a Barron class ⋮ Hessian averaging in stochastic Newton methods achieves superlinear convergence ⋮ From \(p\)-Wasserstein bounds to moderate deviations ⋮ A Spectral Method for Joint Community Detection and Orthogonal Group Synchronization ⋮ Localization in 1D non-parametric latent space models from pairwise affinities ⋮ A bootstrap method for spectral statistics in high-dimensional elliptical models ⋮ The β-Delaunay tessellation IV: Mixing properties and central limit theorems ⋮ Near-optimal bounds for generalized orthogonal Procrustes problem via generalized power method ⋮ A simple approach for quantizing neural networks ⋮ Modewise operators, the tensor restricted isometry property, and low-rank tensor recovery ⋮ Deep nonparametric regression on approximate manifolds: nonasymptotic error bounds with polynomial prefactors ⋮ Optimally tackling covariate shift in RKHS-based nonparametric regression ⋮ Fast convergence of empirical barycenters in Alexandrov spaces and the Wasserstein space ⋮ The effect of intrinsic dimension on the Bayes-error of projected quadratic discriminant classification ⋮ On the smoothed analysis of the smallest singular value with discrete noise ⋮ On Hadamard powers of random Wishart matrices ⋮ On Lasso and Slope drift estimators for Lévy-driven Ornstein-Uhlenbeck processes ⋮ Concentration of measure bounds for matrix-variate data with missing values ⋮ Dimension-agnostic inference using cross U-statistics ⋮ Optimal Scheduling of Entropy Regularizer for Continuous-Time Linear-Quadratic Reinforcement Learning ⋮ An Asymptotic Analysis of Random Partition Based Minibatch Momentum Methods for Linear Regression Models ⋮ Asymptotic theory in bipartite graph models with a growing number of parameters ⋮ Link Prediction for Egocentrically Sampled Networks ⋮ High-dimensional limit theorems for SGD: Effective dynamics and critical scaling ⋮ A cross-validation framework for signal denoising with applications to trend filtering, dyadic CART and beyond ⋮ Learning low-dimensional nonlinear structures from high-dimensional noisy data: an integral operator approach ⋮ Optimal subgroup selection ⋮ Quantitative stability of optimal transport maps under variations of the target measure ⋮ High-dimensional composite quantile regression: optimal statistical guarantees and fast algorithms ⋮ Bayesian Inference Using Synthetic Likelihood: Asymptotics and Adjustments ⋮ Performance bounds of the intensity-based estimators for noisy phase retrieval ⋮ Geometric sharp large deviations for random projections of \(\ell_p^n\) spheres and balls ⋮ Nonlinear model reduction for slow-fast stochastic systems near unknown invariant manifolds ⋮ Covariate-Assisted Community Detection in Multi-Layer Networks ⋮
Inference in a Class of Optimization Problems: Confidence Regions and Finite Sample Bounds on Errors in Coverage Probabilities ⋮ Expectile trace regression via low-rank and group sparsity regularization ⋮ Supervised homogeneity fusion: a combinatorial approach ⋮ Matrix deviation inequality for \(\ell_p\)-norm ⋮ Tractability from overparametrization: the example of the negative perceptron ⋮ Simple proof of the risk bound for denoising by exponential weights for asymmetric noise distributions ⋮ Guarantees for Spontaneous Synchronization on Random Geometric Graphs ⋮ Primal-Dual Regression Approach for Markov Decision Processes with General State and Action Spaces ⋮ Threshold for the expected measure of random polytopes ⋮ Efficient change point detection and estimation in high-dimensional correlation matrices ⋮ Dimension-free bounds for sums of dependent matrices and operators with heavy-tailed distributions ⋮ Sharp Analysis of Sketch-and-Project Methods via a Connection to Randomized Singular Value Decomposition ⋮ Hardness of Random Optimization Problems for Boolean Circuits, Low-Degree Polynomials, and Langevin Dynamics ⋮ Hardness of (M)LWE with semi-uniform seeds ⋮ Monte Carlo Methods for Estimating the Diagonal of a Real Symmetric Matrix ⋮ An Optimal Scheduled Learning Rate for a Randomized Kaczmarz Algorithm ⋮ Random projection preserves stability with high probability ⋮ The critical mean-field Chayes–Machta dynamics ⋮ Randomized numerical linear algebra: Foundations and algorithms ⋮ Deep learning: a statistical viewpoint ⋮ Optimally Weighted PCA for High-Dimensional Heteroscedastic Data ⋮ Lower bounds on the low-distortion embedding dimension of submanifolds of \(\mathbb{R}^n\) ⋮ Approximation bounds for norm constrained neural networks with applications to regression and GANs ⋮ Branch-and-bound solves random binary IPs in poly\((n)\)-time ⋮ Solving orthogonal group synchronization via convex and low-rank optimization: tightness and landscape analysis ⋮ Optimal tail exponents in general last passage percolation via bootstrapping \& geodesic geometry ⋮ Exact matching of random graphs with constant correlation ⋮ Reinforcement Learning for Linear-Convex Models with Jumps via Stability Analysis of Feedback Controls ⋮ \(k\)-median: exact recovery in the extended stochastic ball model ⋮ Gaussian analytic functions of bounded mean oscillation ⋮ Sharp global convergence guarantees for iterative nonconvex optimization with random data ⋮ Bernoulli randomness and Bernoulli normality ⋮ Likelihood landscape and maximum likelihood estimation for the discrete orbit recovery model ⋮ Outliers in spectrum of sparse Wigner matrices ⋮ A note on the prediction error of principal component regression in high dimensions ⋮ Robust sensing of low-rank matrices with non-orthogonal sparse decomposition ⋮ Efficient joint object matching via linear programming ⋮ Friendly bisections of random graphs ⋮ Stable phase retrieval and perturbations of frames ⋮ Model Reduction for Nonlinear Systems by Balanced Truncation of State and Gradient Covariance ⋮ A deep network construction that adapts to intrinsic dimensionality beyond the domain ⋮ Neural network approximation of continuous functions in high dimensions with applications to inverse problems ⋮ Low-rank matrix recovery problem minimizing a new ratio of two norms approximating the rank function then using an ADMM-type solver with applications ⋮ Improved power analysis attacks on Falcon ⋮ Oracle inequality for sparse trace regression models with exponential \(\beta\)-mixing errors ⋮ Marchenko–Pastur law with relaxed independence conditions ⋮
Robust Recovery of Low-Rank Matrices and Low-Tubal-Rank Tensors from Noisy Sketches ⋮ Rates of Bootstrap Approximation for Eigenvalues in High-Dimensional PCA ⋮ Long lines in subsets of large measure in high dimension ⋮ Spectral graph matching and regularized quadratic relaxations. I: Algorithm and Gaussian analysis ⋮ A class of dimension-free metrics for the convergence of empirical measures ⋮ An integer parallelotope with small surface area ⋮ Pathwise CVA regressions with oversimulated defaults ⋮ General stochastic separation theorems with optimal bounds ⋮ Statistical guarantees for regularized neural networks ⋮ A theory of capacity and sparse neural encoding ⋮ The rate of convergence for sparse and low-rank quantile trace regression