Towards understanding sparse filtering: a theoretical perspective
From MaRDI portal
Publication:2179299
DOI: 10.1016/J.NEUNET.2017.11.010
zbMATH Open: 1434.68483
arXiv: 1603.08831
OpenAlex: W2328604275
Wikidata: Q47323844 (Scholia: Q47323844)
MaRDI QID: Q2179299
FDO: Q2179299
Authors: Fabio M. Zennaro, Ke Chen
Publication date: 12 May 2020
Published in: Neural Networks
Abstract: In this paper we present a theoretical analysis to understand sparse filtering, a recent and effective algorithm for unsupervised learning. The aim of this research is not to show whether or how well sparse filtering works, but to understand why and when sparse filtering does work. We provide a thorough theoretical analysis of sparse filtering and its properties, and further offer an experimental validation of the main outcomes of our theoretical analysis. We show that sparse filtering works by explicitly maximizing the entropy of the learned representation through the maximization of a proxy of sparsity, and by implicitly preserving mutual information between the original and learned representations through the constraint of preserving a structure of the data, specifically the structure defined by neighbourhood relations under the cosine distance. Furthermore, we empirically validate our theoretical results on artificial and real data sets, and we apply our theoretical understanding to explain the success of sparse filtering on real-world problems. Our work provides a strong theoretical basis for understanding sparse filtering: it highlights the assumptions and conditions for success behind this feature distribution learning algorithm, and provides insights for developing new feature distribution learning algorithms.
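For context, the sparse filtering objective that the paper analyzes (introduced by Ngiam et al., 2011) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name, matrix shapes, and the `eps` smoothing constant are assumptions chosen for clarity. The sparsity proxy maximized here is the (negated) L1 norm of the doubly normalized feature matrix, which is the quantity the abstract refers to.

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Sparse filtering objective to be minimized (sketch).

    W : (n_features, n_inputs) weight matrix being learned
    X : (n_inputs, n_examples) data matrix, one example per column
    Returns the L1 sparsity penalty of the normalized representation.
    """
    # Soft-absolute activation: smooth approximation of |W @ X|
    F = np.sqrt((W @ X) ** 2 + eps)
    # Normalize each feature (row) to unit L2 norm across examples
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + eps)
    # Normalize each example (column) to unit L2 norm across features
    F = F / (np.linalg.norm(F, axis=0, keepdims=True) + eps)
    # L1 penalty: minimizing it drives the representation toward sparsity
    return np.abs(F).sum()

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 10))
X = rng.standard_normal((10, 20))
value = sparse_filtering_objective(W, X)
```

Because each of the 20 columns ends up with (approximately) unit L2 norm, its L1 norm lies between 1 and sqrt(5), so the total objective lies between 20 and 20*sqrt(5); training would minimize this value over W with a gradient-based optimizer.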
Full work available at URL: https://arxiv.org/abs/1603.08831
Recommendations
- Introduction to compressed sensing and sparse filtering
- Compressed sensing and sparse filtering
- Sparsity optimization in design of multidimensional filter networks
- Sparsity-Aware Data-Selective Adaptive Filters
- Sparse filtering under bounded exogenous disturbances
- Sparse representation for sampled-data \(H^\infty\) filters
- Sparse Filter Design Under a Quadratic Constraint: Low-Complexity Algorithms
- Sparse Signal Processing
- Nonlinear Filtering for Sparse Signal Recovery From Incomplete Measurements
Keywords: soft clustering; information preservation; cosine metric; feature distribution learning; intrinsic structure; sparse filtering
Cites Work
- \(K\)-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation
- Title not available
- Matching pursuits with time-frequency dictionaries
- Sparse and redundant representations. From theory to applications in signal and image processing.
- Information theoretic learning. Renyi's entropy and kernel perspectives
- Title not available
- Atomic decomposition by basis pursuit
- A Fast Learning Algorithm for Deep Belief Nets
- Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion
- Comparing Measures of Sparsity
- Information geometry and its applications
Cited In (3)
Uses Software