On some similarities and differences between deep neural networks and kernel learning machines
From MaRDI portal
Publication:5048595
Recommendations
- On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces
- When do neural networks outperform kernel methods?
- Performance comparisons of neural networks and machine learning techniques: A critical assessment of the methodology
- Deep vs. shallow networks: an approximation theory perspective
Cites work
- scientific article; zbMATH DE number 45848
- scientific article; zbMATH DE number 795573
- An empirical demonstration of the no free lunch theorem
- Approximation and learning by greedy algorithms
- Approximation by superpositions of a sigmoidal function
- Bayesian learning for neural networks
- Deep learning
- Dropout fails to regularize nonparametric learners
- Kernelized cost-sensitive listwise ranking
- Model Selection for Optimal Prediction in Statistical Machine Learning
- Multilayer feedforward networks are universal approximators
- Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position
- On the ubiquity of the Bayesian paradigm in statistical machine learning and data science
- Preface: The multifaceted impact of statistical methodology and theory in data science
- Principles and theory for data mining and machine learning
- Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation
- Support-vector networks
- Universal approximation bounds for superpositions of a sigmoidal function
Cited in (1)
This page was built for publication: On some similarities and differences between deep neural networks and kernel learning machines