Supervised learning with probabilistic morphisms and kernel mean embeddings
Publication: 6436166
arXiv: 2305.06348 · MaRDI QID: Q6436166 · FDO: Q6436166
Publication date: 10 May 2023
Abstract: In this paper I propose a concept of a correct loss function in a generative model of supervised learning for an input space $\mathcal{X}$ and a label space $\mathcal{Y}$, both of which are measurable spaces. A correct loss function in a generative model of supervised learning must accurately measure the discrepancy between elements of a hypothesis space $\mathcal{H}$ of possible predictors and the supervisor operator, even when the supervisor operator does not belong to $\mathcal{H}$. To define correct loss functions, I propose a characterization of a regular conditional probability measure $\mu_{\mathcal{Y}|\mathcal{X}}$ for a probability measure $\mu$ on $\mathcal{X} \times \mathcal{Y}$ relative to the projection $\Pi_{\mathcal{X}} : \mathcal{X} \times \mathcal{Y} \to \mathcal{X}$ as a solution of a linear operator equation. If $\mathcal{Y}$ is a separable metrizable topological space with the Borel $\sigma$-algebra $\mathcal{B}(\mathcal{Y})$, I propose an additional characterization of a regular conditional probability measure $\mu_{\mathcal{Y}|\mathcal{X}}$ as a minimizer of mean square error on the space of Markov kernels, referred to as probabilistic morphisms, from $\mathcal{X}$ to $\mathcal{Y}$. This characterization utilizes kernel mean embeddings. Building upon these results and employing inner measure to quantify the generalizability of a learning algorithm, I extend a result due to Cucker and Smale, which addresses the learnability of a regression model, to the setting of a conditional probability estimation problem. Additionally, I present a variant of Vapnik's regularization method for solving stochastic ill-posed problems, incorporating inner measure, and showcase its applications.
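As a rough illustration of the two characterizations mentioned in the abstract (the notation below, including $\Sigma_{\mathcal{X}}$, $T$, $k$, $\Phi$, and $\mathcal{H}_k$, is standard shorthand supplied here and may differ from the paper's), a regular conditional probability measure is pinned down by the disintegration identity, and under a kernel mean embedding it arises as a mean-square-error minimizer over Markov kernels:
\[
\mu(A \times B) \;=\; \int_{A} \mu_{\mathcal{Y}|\mathcal{X}}(B \mid x)\, \mathrm{d}\bigl((\Pi_{\mathcal{X}})_{*}\mu\bigr)(x),
\qquad A \in \Sigma_{\mathcal{X}},\; B \in \mathcal{B}(\mathcal{Y}),
\]
\[
\mu_{\mathcal{Y}|\mathcal{X}} \;\in\; \operatorname*{arg\,min}_{T \,:\, \mathcal{X} \rightsquigarrow \mathcal{Y}}
\int_{\mathcal{X} \times \mathcal{Y}} \bigl\lVert k(\cdot, y) - \Phi\bigl(T(\cdot \mid x)\bigr) \bigr\rVert_{\mathcal{H}_k}^{2}\, \mathrm{d}\mu(x, y),
\qquad \Phi(\nu) = \int_{\mathcal{Y}} k(\cdot, y)\, \mathrm{d}\nu(y).
\]
Here $T$ ranges over Markov kernels (probabilistic morphisms) from $\mathcal{X}$ to $\mathcal{Y}$, and $k$ is a bounded measurable kernel on $\mathcal{Y}$ with RKHS $\mathcal{H}_k$; the second display recovers $\mu_{\mathcal{Y}|\mathcal{X}}$ (up to $(\Pi_{\mathcal{X}})_{*}\mu$-null sets) when the mean embedding $\Phi$ is injective, i.e. when $k$ is characteristic. The paper's exact hypotheses and statements may differ from this sketch.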
Mathematics Subject Classification:
- Convergence of probability measures (60B10)
- Nonparametric estimation (62G05)
- Applications of functional analysis in probability theory and statistics (46N30)
- Higher categories and homotopical algebra (18N99)