Why does deep and cheap learning work so well?

From MaRDI portal
Publication:1676557

DOI: 10.1007/s10955-017-1836-5
zbMath: 1373.82061
arXiv: 1608.08225
OpenAlex: W3105432754
MaRDI QID: Q1676557

Henry W. Lin, David Rolnick, Max Tegmark

Publication date: 9 November 2017

Published in: Journal of Statistical Physics

Full work available at URL: https://arxiv.org/abs/1608.08225




Related Items

On PDE Characterization of Smooth Hierarchical Functions Computed by Neural Networks
Deep distributed convolutional neural networks: Universality
Understanding autoencoders with information theoretic concepts
Universal approximation with quadratic deep networks
Solving second-order nonlinear evolution partial differential equations using deep learning
Quantifying the separability of data classes in neural networks
On the approximation of functions by tanh neural networks
A physics-constrained deep residual network for solving the sine-Gordon equation
On decision regions of narrow deep neural networks
Free dynamics of feature learning processes
Dunkl analogue of Szász Schurer Beta bivariate operators
Topology optimization based on deep representation learning (DRL) for compliance and stress-constrained design
Linearly Recurrent Autoencoder Networks for Learning Dynamics
Enforcing constraints for interpolation and extrapolation in generative adversarial networks
Machine learning algorithms based on generalized Gibbs ensembles
Resolution and relevance trade-offs in deep learning
Deep learning acceleration of total Lagrangian explicit dynamics for soft tissue mechanics
Exact maximum-entropy estimation with Feynman diagrams
Hierarchical deep learning neural network (HiDeNN): an artificial intelligence (AI) framework for computational science and engineering
Features of the spectral density of a spin system
A selective overview of deep learning
A Computational Perspective of the Role of the Thalamus in Cognition
On Functions Computed on Trees
Provably scale-covariant continuous hierarchical networks based on scale-normalized differential expressions coupled in cascade
ReLU Networks Are Universal Approximators via Piecewise Linear or Constant Functions
Constructive expansion for vector field theories I. Quartic models in low dimensions
Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
Deep neural networks for rotation-invariance approximation and learning
Measurement error models: from nonparametric methods to deep neural networks
Optimal adaptive control of partially uncertain linear continuous-time systems with state delay
Explicitly antisymmetrized neural network layers for variational Monte Carlo simulation



This page was built for publication: Why does deep and cheap learning work so well?