Two-layer contractive encodings for learning stable nonlinear features
From MaRDI portal
Publication:890731
Recommendations
- Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion
- Complex-valued autoencoders
- Why does unsupervised pre-training help deep learning?
- What regularized auto-encoders learn from the data-generating distribution
- Layered Networks for Unsupervised Learning
Cites work
- scientific article; zbMATH DE number 3174791
- scientific article; zbMATH DE number 46318
- An efficient learning procedure for deep Boltzmann machines
- Enhanced gradient for training restricted Boltzmann machines
- Learning deep architectures for AI
- Multilayer feedforward networks are universal approximators
- On the expressive power of deep architectures
- Reducing the Dimensionality of Data with Neural Networks
- Representational Power of Restricted Boltzmann Machines and Deep Belief Networks
- Why does unsupervised pre-training help deep learning?
Cited in (2)
This page was built for publication: Two-layer contractive encodings for learning stable nonlinear features