On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces
DOI: 10.1016/j.neunet.2019.12.014 · zbMATH Open: 1434.68508 · DBLP: journals/nn/HayakawaS20 · arXiv: 1905.09195 · OpenAlex: W2996782006 · Wikidata: Q92414883 (Scholia) · MaRDI QID: Q2185697
Authors: Satoshi Hayakawa, Taiji Suzuki
Publication date: 5 June 2020
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/1905.09195
MSC classification:
- Nonparametric regression and quantile regression (62G08)
- Artificial neural networks and deep learning (68T07)
Cites Work
- The elements of statistical learning. Data mining, inference, and prediction
- Weak convergence and empirical processes. With applications to statistics
- Ideal spatial adaptation by wavelet shrinkage
- Pattern recognition and machine learning
- Ten Lectures on Wavelets
- Title not available
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Title not available
- Wavelet threshold estimation of a regression function with random design
- Minimax estimation via wavelet shrinkage
- Concentration inequalities and asymptotic results for ratio type empirical processes
- Title not available
- Approximation by superpositions of a sigmoidal function
- Minimax theory of image reconstruction
- Information-theoretic determination of minimax rates of convergence
- Minimax estimation of linear functionals over nonconvex parameter spaces
- Minimax risk over hyperrectangles, and implications
- Unconditional bases and bit-level compression
- Unconditional bases are optimal bases for data compression and for statistical estimation
- Adaptive Minimax Estimation over Sparse $\ell_q$-Hulls
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- Neural network with unbounded activation functions is universal approximator
- Nonparametric regression using deep neural networks with ReLU activation function
Cited In (9)
- Deep learning theory of distribution regression with CNNs
- Rejoinder: On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning
- Optimal nonparametric inference via deep neural network
- Estimation error analysis of deep learning on the regression problem on the variable exponent Besov space
- Adaptive deep learning for nonlinear time series models
- Consistent Sparse Deep Learning: Theory and Computation
- Title not available
- Drift estimation for a multi-dimensional diffusion process using deep neural networks
- Title not available