Visualizing probabilistic models and data with intensive principal component analysis

From MaRDI portal
Publication:5218558

DOI: 10.1073/PNAS.1817218116
zbMATH Open: 1431.62255
arXiv: 1810.02877
OpenAlex: W2956072795
Wikidata: Q92998871 (Scholia: Q92998871)
MaRDI QID: Q5218558 (FDO: Q5218558)


Authors: Katherine N. Quinn, Colin B. Clement, Francesco de Bernardis, Michael D. Niemack, James P. Sethna


Publication date: 4 March 2020

Published in: Proceedings of the National Academy of Sciences

Abstract: Unsupervised learning makes manifest the underlying structure of data without curated training and specific problem definitions. However, the inference of relationships between data points is frustrated by the 'curse of dimensionality' in high dimensions. Inspired by replica theory from statistical mechanics, we consider replicas of the system to tune the dimensionality and take the limit as the number of replicas goes to zero. The result is the intensive embedding, which is not only isometric (preserving local distances) but allows global structure to be more transparently visualized. We develop the Intensive Principal Component Analysis (InPCA) and demonstrate clear improvements in visualizations of the Ising model of magnetic spins, a neural network, and the dark energy cold dark matter (ΛCDM) model as applied to the Cosmic Microwave Background.


Full work available at URL: https://arxiv.org/abs/1810.02877
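The abstract describes InPCA as an embedding of probability distributions whose pairwise distances stay intensive (well-behaved) in the zero-replica limit. A minimal sketch of how such an embedding can be computed is below, assuming (not stated in this abstract) that the intensive squared distance between two distributions is the negative log of their Bhattacharyya coefficient and that the embedding is obtained by classical multidimensional scaling (double-centering plus eigendecomposition); consult the paper for the exact definitions and normalization.

```python
import numpy as np

def inpca_embed(P, n_components=2):
    """Sketch of an InPCA-style embedding.

    P: array of shape (n_models, n_states); each row is a probability
    distribution over the same discrete states.

    Assumption (hypothetical reading of the method): the intensive
    squared distance is d_ij^2 = -log(sum_x sqrt(p_i(x) p_j(x))),
    i.e. the negative log of the Bhattacharyya coefficient.
    """
    P = np.asarray(P, dtype=float)
    # Bhattacharyya coefficients between all pairs of distributions:
    # S[i, j] = sum_x sqrt(p_i(x) * p_j(x)); S[i, i] = 1.
    S = np.sqrt(P) @ np.sqrt(P).T
    D2 = -np.log(np.clip(S, 1e-300, None))  # intensive squared distances
    n = D2.shape[0]
    # Classical MDS: double-center the squared-distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    W = -0.5 * J @ D2 @ J
    # The intensive embedding can be Minkowski-like, so keep eigenvalues
    # of largest magnitude; negative ones mark imaginary directions.
    vals, vecs = np.linalg.eigh(W)
    order = np.argsort(-np.abs(vals))[:n_components]
    coords = vecs[:, order] * np.sqrt(np.abs(vals[order]))
    return coords, vals[order]
```

For example, three Bernoulli-like distributions `[[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]` embed with the middle distribution equidistant from the outer two, as the symmetry of the Bhattacharyya coefficient requires.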








Cited In (1)





This page was built for publication: Visualizing probabilistic models and data with intensive principal component analysis
