Bayesian feature selection with strongly regularizing priors maps to the Ising model

Publication: 5380347

DOI: 10.1162/NECO_a_00780
zbMATH Open: 1472.62094
DBLP: journals/neco/FisherM15
arXiv: 1411.0591
OpenAlex: W2154138011
Wikidata: Q40530502
MaRDI QID: Q5380347


Authors: Charles K. Fisher, Pankaj Mehta


Publication date: 4 June 2019

Published in: Neural Computation

Abstract: Identifying small subsets of features that are relevant for prediction and/or classification tasks is a central problem in machine learning and statistics. The feature selection task is especially important, and computationally difficult, for modern datasets where the number of features can be comparable to, or even exceed, the number of samples. Here, we show that, under some mild conditions, feature selection with Bayesian inference takes a universal form and reduces to calculating the magnetizations of an Ising model. Our results exploit the observation that the evidence takes a universal form for strongly regularizing priors: priors that have a large effect on the posterior probability even in the infinite-data limit. We derive explicit expressions for feature selection for generalized linear models, a large class of statistical techniques that includes linear and logistic regression. We illustrate the power of our approach by analyzing feature selection in a logistic regression-based classifier trained to distinguish between the letters B and D in the notMNIST dataset.
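To make the mapping in the abstract concrete: each feature i carries an inclusion spin s_i in {0, 1}, the log-evidence of a feature subset takes an Ising form E(s) = sum_i h_i s_i + (1/2) sum_{i != j} J_ij s_i s_j, and the posterior probability that feature i is selected is the magnetization <s_i> under the Boltzmann distribution P(s) proportional to exp(E(s)). The sketch below computes these magnetizations by brute-force enumeration for a toy problem; the fields h and couplings J here are hypothetical placeholders, whereas the paper derives them explicitly from the data and prior for generalized linear models.

import itertools
import numpy as np

def ising_magnetizations(h, J):
    # Brute-force magnetizations <s_i> for inclusion spins s_i in {0, 1}.
    # P(s) ~ exp(E(s)) with E(s) = h.s + 0.5 * s.J.s; J symmetric, zero diagonal.
    # Cost is exponential in len(h), so this is only feasible for toy problems.
    p = len(h)
    Z = 0.0
    m = np.zeros(p)
    for bits in itertools.product((0.0, 1.0), repeat=p):
        s = np.array(bits)
        w = np.exp(h @ s + 0.5 * s @ J @ s)
        Z += w
        m += w * s
    return m / Z  # posterior inclusion probability of each feature

# Hypothetical example: three features, the first strongly favored,
# the last two redundant with each other (negative coupling).
h = np.array([2.0, 0.5, 0.5])
J = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.5],
              [0.0, -1.5, 0.0]])
print(ising_magnetizations(h, J))

For realistic feature counts the enumeration would be replaced by standard Ising machinery, such as Monte Carlo sampling or mean-field approximations of the magnetizations.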


Full work available at URL: https://arxiv.org/abs/1411.0591




Cited in: 2 publications
