Why does deep and cheap learning work so well?
Publication: 1676557
DOI: 10.1007/s10955-017-1836-5
zbMATH: 1373.82061
arXiv: 1608.08225
OpenAlex: W3105432754
MaRDI QID: Q1676557
Henry W. Lin, David Rolnick, Max Tegmark
Publication date: 9 November 2017
Published in: Journal of Statistical Physics
Full work available at URL: https://arxiv.org/abs/1608.08225
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural nets applied to problems in time-dependent statistical mechanics (82C32)
Related Items (31)
- On PDE Characterization of Smooth Hierarchical Functions Computed by Neural Networks
- Deep distributed convolutional neural networks: Universality
- Understanding autoencoders with information theoretic concepts
- Universal approximation with quadratic deep networks
- Solving second-order nonlinear evolution partial differential equations using deep learning
- Quantifying the separability of data classes in neural networks
- On the approximation of functions by tanh neural networks
- A physics-constrained deep residual network for solving the sine-Gordon equation
- On decision regions of narrow deep neural networks
- Free dynamics of feature learning processes
- Dunkl analogue of Szász Schurer Beta bivariate operators
- Topology optimization based on deep representation learning (DRL) for compliance and stress-constrained design
- Linearly Recurrent Autoencoder Networks for Learning Dynamics
- Enforcing constraints for interpolation and extrapolation in generative adversarial networks
- Machine learning algorithms based on generalized Gibbs ensembles
- Resolution and relevance trade-offs in deep learning
- Deep learning acceleration of total Lagrangian explicit dynamics for soft tissue mechanics
- Exact maximum-entropy estimation with Feynman diagrams
- Hierarchical deep learning neural network (HiDeNN): an artificial intelligence (AI) framework for computational science and engineering
- Features of the spectral density of a spin system
- A selective overview of deep learning
- A Computational Perspective of the Role of the Thalamus in Cognition
- On Functions Computed on Trees
- Provably scale-covariant continuous hierarchical networks based on scale-normalized differential expressions coupled in cascade
- ReLU Networks Are Universal Approximators via Piecewise Linear or Constant Functions
- Constructive expansion for vector field theories I. Quartic models in low dimensions
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- Deep neural networks for rotation-invariance approximation and learning
- Measurement error models: from nonparametric methods to deep neural networks
- Optimal adaptive control of partially uncertain linear continuous-time systems with state delay
- Explicitly antisymmetrized neural network layers for variational Monte Carlo simulation
Cites Work
- Multilayer feedforward networks are universal approximators
- Gaussian elimination is not optimal
- Deep vs. shallow networks: An approximation theory perspective
- DOI: 10.1162/153244303765208368
- On the Expressive Power of Deep Architectures
- Information Theory and Statistical Mechanics
- Learning Deep Architectures for AI
- Powers of tensors and fast matrix multiplication
- Solving the quantum many-body problem with artificial neural networks
- Structural risk minimization over data-dependent hierarchies
- Causal structure of the entanglement renormalization ansatz
- Hierarchical model of natural images and the origin of scale invariance
- Statistical Physics of Fields
- Elements of Information Theory
- On Information and Sufficiency
- Approximation by superpositions of a sigmoidal function