Two-layer contractive encodings for learning stable nonlinear features
DOI: 10.1016/J.NEUNET.2014.09.008 · zbMATH Open: 1325.68196 · OpenAlex: W2139336101 · Wikidata: Q50630147 · Scholia: Q50630147 · MaRDI QID: Q890731 · FDO: Q890731
Authors: Hannes Schulz, Tapani Raiko, Sven Behnke, Kyung Hyun Cho
Publication date: 11 November 2015
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2014.09.008
Recommendations
- Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion
- Complex-valued autoencoders
- Why does unsupervised pre-training help deep learning?
- What regularized auto-encoders learn from the data-generating distribution
- Layered Networks for Unsupervised Learning
Keywords: deep learning; linear transformation; semi-supervised learning; multi-layer perceptron; pretraining; two-layer contractive encoding
Cites Work
- Reducing the Dimensionality of Data with Neural Networks
- Learning deep architectures for AI
- Why does unsupervised pre-training help deep learning?
- Title not available
- Multilayer feedforward networks are universal approximators
- Representational Power of Restricted Boltzmann Machines and Deep Belief Networks
- Title not available
- An efficient learning procedure for deep Boltzmann machines
- On the expressive power of deep architectures
- Enhanced Gradient for Training Restricted Boltzmann Machines
Cited In (1)
Uses Software