Geometric Upper Bounds on Rates of Variable-Basis Approximation


DOI: 10.1109/TIT.2008.2006383
zbMath: 1319.68177
OpenAlex: W2099706683
MaRDI QID: Q3604968

Věra Kůrková, Marcello Sanguineti

Publication date: 24 February 2009

Published in: IEEE Transactions on Information Theory

Full work available at URL: https://doi.org/10.1109/tit.2008.2006383
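Background (standard context, not part of this record): the title refers to rates of variable-basis, i.e. n-term dictionary, approximation, and geometric bounds of this type refine the classical Maurey–Jones–Barron estimate. Below is a minimal compilable LaTeX sketch of that classical baseline statement; the symbols G, s_G, and conv_n are notation introduced here for the sketch, and the bound shown is the well-known prior result, not the paper's own refinement.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Classical Maurey--Jones--Barron bound: the standard baseline that
% geometric upper bounds on variable-basis approximation rates refine.
Let $G$ be a bounded subset of a Hilbert space $(\mathcal{H},\|\cdot\|)$,
let $s_G = \sup_{g \in G} \|g\|$, and let $\mathrm{conv}_n G$ denote the
set of convex combinations of at most $n$ elements of $G$. For every $f$
in the closure of the convex hull of $G$,
\[
  \inf_{h \in \mathrm{conv}_n G} \| f - h \|
  \;\le\; \sqrt{\frac{s_G^{2} - \|f\|^{2}}{n}} .
\]
\end{document}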

Related Items

Suboptimal solutions to dynamic optimization problems via approximations of the policy functions
Minimizing sequences for a family of functional optimal estimation problems
On the approximation of functions by tanh neural networks
Approximation capabilities of neural networks on unbounded domains
Convergence rates for shallow neural networks learned by gradient descent
Complexity estimates based on integral transforms induced by computational units
Lower bounds for artificial neural network approximations: a proof that shallow neural networks fail to overcome the curse of dimensionality
Accuracy of approximations of solutions to Fredholm equations by kernel methods
Dynamic programming and value-function approximation in sequential decision problems: error analysis and numerical results
Can dictionary-based computational models outperform the best linear ones?
Approximate dynamic programming for stochastic \(N\)-stage optimization with application to optimal consumption under uncertainty
New insights into Witsenhausen's counterexample
Estimates of variation with respect to a set and applications to optimization problems
Some comparisons of complexity in dictionary-based and linear computational models
Optimization based on quasi-Monte Carlo sampling to design state estimators for non-linear systems
Estimates of the approximation error using Rademacher complexity: Learning vector-valued functions
Value and Policy Function Approximations in Infinite-Horizon Optimization Problems
Suboptimal Policies for Stochastic \(N\)-Stage Optimization: Accuracy Analysis and a Case Study from Optimal Consumption
High-order approximation rates for shallow neural networks with cosine and \(\mathrm{ReLU}^k\) activation functions