Nonredundant sparse feature extraction using autoencoders with receptive fields clustering
DOI: 10.1016/j.neunet.2017.04.012
zbMATH Open: 1429.68248
OpenAlex: W2611457475
Wikidata: Q38759432 (Scholia: Q38759432)
MaRDI QID: Q2292197 (FDO: Q2292197)
Authors: Babajide O. Ayinde, Jacek M. Zurada
Publication date: 3 February 2020
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2017.04.012
Recommendations
- Approximating morphological operators with part-based representations learned by asymmetric auto-encoders
- Generative model of autoencoders self-learning on images represented by count samples
- Subspace clustering using a low-rank constrained autoencoder
- Part-based approximations for morphological operators using asymmetric auto-encoders
- Reducing the Dimensionality of Data with Neural Networks
Classification
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Artificial neural networks and deep learning (68T07)
Cites Work
- Visualizing data using t-SNE
- Reducing the Dimensionality of Data with Neural Networks
- Learning deep architectures for AI
- A Limited Memory Algorithm for Bound Constrained Optimization
- A Fast Learning Algorithm for Deep Belief Nets
- Two-layer contractive encodings for learning stable nonlinear features
Cited In (4)
- Part-based approximations for morphological operators using asymmetric auto-encoders
- Nonparametric guidance of autoencoder representations using label information
- Sparse Codes Auto-Extractor for Classification: A Joint Embedding and Dictionary Learning Framework for Representation
- Approximating morphological operators with part-based representations learned by asymmetric auto-encoders