Chaining meets chain rule: multilevel entropic regularization and training of neural networks
From MaRDI portal
Publication:4969253
Cites work
- scientific article; zbMATH DE number 5544465 (no title available)
- scientific article; zbMATH DE number 3595981 (no title available)
- scientific article; zbMATH DE number 6781368 (no title available)
- Fifty years of Shannon theory
- From ε-entropy to KL-entropy: analysis of minimum information complexity density estimation
- High-dimensional probability. An introduction with applications in data science
- Information-theoretic upper and lower bounds for statistical estimation
- Learners that use little information
- On prediction of individual sequences
- On the properties of variational approximations of Gibbs posteriors
- Probability and stochastics.
- Rényi Divergence and Kullback-Leibler Divergence
- Sparse estimation by exponential weighting
- Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images
- Upper and lower bounds for stochastic processes. Modern methods and classical problems
Cited in (2)