Slow Feature Analysis: Unsupervised Learning of Invariances
From MaRDI portal
Publication: 4330665
DOI: 10.1162/089976602317318938
zbMath: 0994.68591
OpenAlex: W2146444479
Wikidata: Q52958217
Scholia: Q52958217
MaRDI QID: Q4330665
Laurenz Wiskott, Terrence J. Sejnowski
Publication date: 10 October 2002
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/089976602317318938
Learning and adaptive systems in artificial intelligence (68T05)
Pattern recognition, speech recognition (68T10)
Computing methodologies and applications (68U99)
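The method this record indexes, slow feature analysis (SFA), extracts the most slowly varying directions from a multivariate time series by whitening the signal and then minimizing the variance of its temporal derivative. A minimal linear-SFA sketch in NumPy (not code from the paper; the toy data and variable names are illustrative):

```python
import numpy as np

def sfa(x):
    """Linear slow feature analysis.

    x: array of shape (T, n), a multivariate time series.
    Returns a projection matrix W whose rows map the centered
    signal to features ordered from slowest to fastest.
    """
    x = x - x.mean(axis=0)                  # center
    d, E = np.linalg.eigh(np.cov(x, rowvar=False))
    S = E / np.sqrt(d)                      # whitening matrix
    z = x @ S                               # whitened signal, unit variance
    zdot = np.diff(z, axis=0)               # discrete temporal derivative
    _, E2 = np.linalg.eigh(np.cov(zdot, rowvar=False))
    return (S @ E2).T                       # eigh sorts ascending: slowest first

# toy example: a slow sine mixed into fast white noise
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 1000)
slow = np.sin(t)
fast = rng.standard_normal(1000)
mix = np.stack([slow + 0.5 * fast, fast], axis=1)

W = sfa(mix)
y = (mix - mix.mean(axis=0)) @ W[0]         # slowest extracted feature
print(abs(np.corrcoef(y, slow)[0, 1]))      # close to 1: slow source recovered
```

Because the mixing here is linear and invertible, the slowest feature essentially recovers the sine; the paper's full method additionally applies a fixed nonlinear expansion before this linear step to obtain nonlinear invariances.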
Related Items (56)
Incorporating physical constraints in a deep probabilistic machine learning framework for coarse-graining dynamical systems ⋮ On the Relation of Slow Feature Analysis and Laplacian Eigenmaps ⋮ Symbols as self-emergent entities in an optimization process of feature extraction and predictions ⋮ Extracting a low-dimensional predictable time series ⋮ Learning the Nonlinearity of Neurons from Natural Visual Stimuli ⋮ Simple-Cell-Like Receptive Fields Maximize Temporal Coherence in Natural Video ⋮ Slow Feature Analysis: A Theoretical Analysis of Optimal Free Responses ⋮ Learning Optimized Features for Hierarchical Models of Invariant Object Recognition ⋮ Neural Information Processing with Feedback Modulations ⋮ A generalized probabilistic monitoring model with both random and sequential data ⋮ Learning invariant features using inertial priors ⋮ Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots ⋮ Mathematical learning theory through time ⋮ Learning transform invariant object recognition in the visual system with multiple stimuli present during training ⋮ Periodic clustering of simple and complex cells in visual cortex ⋮ A dive into spectral inference networks: improved algorithms for self-supervised learning of continuous spectral representations ⋮ Autoencoding slow representations for semi-supervised data-efficient regression ⋮ Maximum contrast analysis for nonnegative blind source separation ⋮ Graph-based predictable feature analysis ⋮ Quantum dimensionality reduction by linear discriminant analysis ⋮ Generating feature spaces for linear algorithms with regularized sparse kernel slow feature analysis ⋮ Nonlinear dimensionality reduction using a temporal coherence principle ⋮ Slow feature analysis with spiking neurons and its application to audio stimuli ⋮ Learning Spatial Invariance with the Trace Rule in Nonuniform Distributions ⋮ Improved Sparse Coding Under the Influence of Perceptual Attention ⋮ Learning Slowness in a Sparse Model of Invariant Feature Detection ⋮ Learning Visual Spatial Pooling by Strong PCA Dimension Reduction ⋮ Optimal Curiosity-Driven Modular Incremental Slow Feature Analysis ⋮ Neural Quadratic Discriminant Analysis: Nonlinear Decoding with V1-Like Computation ⋮ Learning invariant face recognition from examples ⋮ A Hierarchical Bayesian Model for Learning Nonlinear Statistical Regularities in Nonstationary Natural Signals ⋮ Unsupervised slow subspace-learning from stationary processes ⋮ The Interaction between Semantic Representation and Episodic Memory ⋮ Slowness as a Proxy for Temporal Predictability: An Empirical Comparison ⋮ On the Analysis and Interpretation of Inhomogeneous Quadratic Forms as Receptive Fields ⋮ A Spiking Neuron as Information Bottleneck ⋮ Exploratory analysis of climate data using source separation methods. ⋮ Towards a theoretical foundation for morphological computation with compliant bodies ⋮ Learning the Lie Groups of Visual Invariance ⋮ A Multifactor Winner-Take-All Dynamics ⋮ Colored Subspace Analysis ⋮ A Theoretical Basis for Emergent Pattern Discrimination in Neural Systems Through Slow Feature Extraction ⋮ A Principle for Learning Egocentric-Allocentric Transformation ⋮ Predictive Coding and the Slowness Principle: An Information-Theoretic Approach ⋮ A Theory of Slow Feature Analysis for Transformation-Based Input Signals with an Application to Complex Cells ⋮ Incremental Slow Feature Analysis: Adaptive Low-Complexity Slow Feature Updating from High-Dimensional Input Streams ⋮ Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics ⋮ Improved graph-based SFA: information preservation complements the slowness principle ⋮ What Is the Relation Between Slow Feature Analysis and Independent Component Analysis? ⋮ A Differential Model of the Complex Cell ⋮ Learning viewpoint invariant object representations using a temporal coherence principle ⋮ Independent Slow Feature Analysis and Nonlinear Blind Source Separation ⋮ A Maximum-Likelihood Interpretation for Slow Feature Analysis ⋮ Intrinsic modeling of stochastic dynamical systems using empirical geometry ⋮ State stabilization for gate-model quantum computers
Cites Work
This page was built for publication: Slow Feature Analysis: Unsupervised Learning of Invariances