On the relation of slow feature analysis and Laplacian eigenmaps
Publication: Q2887008
DOI: 10.1162/NECO_A_00214
zbMATH Open: 1237.68163
OpenAlex: W1981899483
Wikidata: Q37932555 (Scholia: Q37932555)
MaRDI QID: Q2887008
FDO: Q2887008
Authors: Henning Sprekeler
Publication date: 15 May 2012
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco_a_00214
Recommendations
- Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- Learning Eigenfunctions Links Spectral Embedding and Kernel PCA
- On the regularized Laplacian eigenmaps
- Slow Feature Analysis: Unsupervised Learning of Invariances
- What Is the Relation Between Slow Feature Analysis and Independent Component Analysis?
Mathematics Subject Classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
Cites Work
- Towards a theoretical foundation for Laplacian-based manifold methods
- Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- Non-linear independent component analysis with diffusion maps
- Semi-supervised learning on Riemannian manifolds
- Slow Feature Analysis: Unsupervised Learning of Invariances
- The first order asymptotics of the extreme eigenvectors of certain Hermitian Toeplitz matrices
- Slow Feature Analysis: A Theoretical Analysis of Optimal Free Responses
- A theoretical basis for emergent pattern discrimination in neural systems through slow feature extraction
Cited In (14)
- What Is the Relation Between Slow Feature Analysis and Independent Component Analysis?
- On the regularized Laplacian eigenmaps
- Optimal Curiosity-Driven Modular Incremental Slow Feature Analysis
- Theoretical analysis of the optimal free responses of graph-based SFA for the design of training graphs
- Incremental slow feature analysis: adaptive low-complexity slow feature updating from high-dimensional input streams
- A Maximum-Likelihood Interpretation for Slow Feature Analysis
- Effective dimensionality reduction for visualizing neural dynamics by Laplacian eigenmaps
- Nonlinear dimensionality reduction using a temporal coherence principle
- Generating feature spaces for linear algorithms with regularized sparse kernel slow feature analysis
- Improved graph-based SFA: information preservation complements the slowness principle
- Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots
- Graph-based predictable feature analysis
- How to solve classification and regression problems on high-dimensional data with a supervised extension of slow feature analysis
- A dive into spectral inference networks: improved algorithms for self-supervised learning of continuous spectral representations