Leveraging maximum entropy and correlation on latent factors for learning representations
DOI: 10.1016/j.neunet.2020.07.027 · zbMATH Open: 1475.68270 · DBLP: journals/nn/He0DZH20 · OpenAlex: W3047349556 · Wikidata: Q99201041 · Scholia: Q99201041 · MaRDI QID: Q2057737 · FDO: Q2057737
Authors: Z. C. He, Jie Liu, Kai Dang, Fuzhen Zhuang, Yalou Huang
Publication date: 7 December 2021
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2020.07.027
Recommendations
- Learning joint latent representations based on information maximization
- Learning with maximum-entropy distributions
- Learning latent factors in linked multi-modality data
- Latent low-rank representation
- Scientific article (zbMATH DE number 7049742)
- On inductive abilities of latent factor models for relational learning
MSC classification:
- 62H25 Factor analysis and principal components; correspondence analysis
- 68T05 Learning and adaptive systems in artificial intelligence
- 15A23 Factorization of matrices
- 94A17 Measures of information, entropy
Cites Work
- Principal component analysis
- Latent Dirichlet allocation (DOI: 10.1162/jmlr.2003.3.4-5.993)
- Projected Gradient Methods for Nonnegative Matrix Factorization
- Non-negative matrix factorization with sparseness constraints
- Learning the parts of objects by non-negative matrix factorization
- A column-wise update algorithm for nonnegative matrix factorization in Bregman divergence with an orthogonal constraint
- A survey of multilinear subspace learning for tensor data
- Weakly Supervised Deep Matrix Factorization for Social Image Understanding
- Rank selection in nonnegative matrix factorization using minimum description length
Cited In (2)