Learning functions of few arbitrary linear parameters in high dimensions
From MaRDI portal
Publication: 434415
DOI: 10.1007/s10208-012-9115-y
zbMath: 1252.65036
arXiv: 1008.3043
OpenAlex: W1644425553
MaRDI QID: Q434415
Karin Schnass, Jan Vybíral, Massimo Fornasier
Publication date: 10 July 2012
Published in: Foundations of Computational Mathematics
Full work available at URL: https://arxiv.org/abs/1008.3043
Keywords: compressed sensing; Chernoff bounds for sums of positive semidefinite matrices; high-dimensional function approximation; stability bounds for invariant subspaces of singular value decompositions
Mathematics Subject Classification: Random matrices (probabilistic aspects) (60B20); Algorithmic information theory (Kolmogorov complexity, etc.) (68Q30); Algorithms for approximation of functions (65D15); Algorithmic randomness and dimension (03D32)
Related Items
A Survey of Compressed Sensing ⋮ Certified dimension reduction in nonlinear Bayesian inverse problems ⋮ The Expected Norm of a Sum of Independent Random Matrices: An Elementary Approach ⋮ Global optimization using random embeddings ⋮ Bound-constrained global optimization of functions with low effective dimensionality using multiple random embeddings ⋮ Approximation of curve-based sleeve functions in high dimensions ⋮ Complexity of approximation of functions of few variables in high dimensions ⋮ Recovery guarantees for polynomial coefficients from weakly dependent data with outliers ⋮ The recovery of ridge functions on the hypercube suffers from the curse of dimensionality ⋮ Data-Driven Polynomial Ridge Approximation Using Variable Projection ⋮ Embedded ridge approximations ⋮ Gelfand numbers related to structured sparsity and Besov space embeddings with small mixed smoothness ⋮ On recovery of regular ridge functions ⋮ Dimension Reduction via Gaussian Ridge Functions ⋮ Estimating multi-index models with response-conditional least squares ⋮ Learning non-parametric basis independent models from point queries via low-rank methods ⋮ Entropy and sampling numbers of classes of ridge functions ⋮ Inverse regression for ridge recovery: a data-driven approach for parameter reduction in computer experiments ⋮ Gradient-Based Dimension Reduction of Multivariate Vector-Valued Functions ⋮ A near-stationary subspace for ridge approximation ⋮ Approximation of generalized ridge functions in high dimensions ⋮ Sparse mixture models inspired by ANOVA decompositions ⋮ Gaussian Quadrature and Polynomial Approximation for One-Dimensional Ridge Functions ⋮ On some aspects of approximation of ridge functions ⋮ Stable recovery of entangled weights: towards robust identification of deep neural networks from minimal samples ⋮ Generalization bounds for sparse random feature expansions ⋮ Interpretable Approximation of High-Dimensional Data ⋮ Information theory and recovery algorithms for data 
fusion in Earth observation ⋮ Recovery of regular ridge functions on the ball ⋮ Robust and resource-efficient identification of two hidden layer neural networks ⋮ On two continuum armed bandit problems in high dimensions
Cites Work
- Sums of random Hermitian matrices and an inequality by Rudelson
- User-friendly tail bounds for sums of random matrices
- Instance-optimality in probability with an \(\ell _1\)-minimization decoder
- Tractability of multivariate problems. Volume I: Linear information
- A note on guaranteed sparse recovery via \(\ell_1\)-minimization
- A simple proof of the restricted isometry property for random matrices
- Optimal reconstruction of a function from its projections
- Harmonic analysis of neural networks
- Ridgelets: estimating with ridge functions
- Approximation of infinitely differentiable multivariate functions is intractable
- Capturing ridge functions in high dimensions from point queries
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Compressed sensing and best \(k\)-term approximation
- Sampling from large matrices
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Strong converse for identification via quantum channels
- On Projection Algorithms for Solving Convex Feasibility Problems
- Ridgelets: a key to higher-dimensional intermittency?
- Compressive Sensing
- Stable signal recovery from incomplete and inaccurate measurements
- Perturbation bounds in connection with singular value decomposition
- Compressed sensing