A universal approximation theorem for mixture-of-experts models

DOI: 10.1162/NECO_A_00892
zbMATH Open: 1474.68266
arXiv: 1602.03683
OpenAlex: W2270588982
Wikidata: Q39391760 (Scholia: Q39391760)
MaRDI QID: Q5380595
FDO: Q5380595


Authors: Luke R. Lloyd-Jones, Hien D. Nguyen, Geoffrey J. McLachlan


Publication date: 5 June 2019

Published in: Neural Computation

Abstract: The mixture-of-experts (MoE) model is a popular neural network architecture for nonlinear regression and classification. The class of MoE mean functions is known to be uniformly convergent to any unknown target function, assuming that the target function is from a Sobolev space that is sufficiently differentiable and that the domain of estimation is a compact unit hypercube. We provide an alternative result, which shows that the class of MoE mean functions is dense in the class of all continuous functions over arbitrary compact domains of estimation. Our result can be viewed as a universal approximation theorem for MoE models.
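
The MoE mean functions discussed in the abstract can be illustrated with a short sketch: a softmax gating network weights the outputs of several expert mean functions, and the weighted sum is the MoE mean. The following Python snippet is a minimal illustrative example only; it assumes softmax gating over affine scores and affine experts, and the function name moe_mean_function and all parameter names are hypothetical, not taken from the paper.

    import numpy as np

    def moe_mean_function(x, gate_weights, gate_biases, expert_weights, expert_biases):
        # Gating network: softmax over affine scores, one score per expert.
        scores = gate_weights @ x + gate_biases            # shape (K,)
        scores = scores - scores.max()                     # numerical stability
        gates = np.exp(scores) / np.exp(scores).sum()      # nonnegative, sum to 1

        # Experts: affine mean functions mu_k(x) = w_k . x + b_k.
        expert_means = expert_weights @ x + expert_biases  # shape (K,)

        # MoE mean function: gate-weighted combination of expert means.
        return float(gates @ expert_means)

    # Example: K = 3 experts on a 2-dimensional compact domain.
    rng = np.random.default_rng(0)
    K, d = 3, 2
    x = rng.uniform(-1.0, 1.0, size=d)
    print(moe_mean_function(
        x,
        gate_weights=rng.normal(size=(K, d)),
        gate_biases=rng.normal(size=K),
        expert_weights=rng.normal(size=(K, d)),
        expert_biases=rng.normal(size=K),
    ))

The denseness result stated in the abstract concerns this class of functions: on any compact domain, any continuous target can be approximated uniformly by such gate-weighted combinations for a sufficiently large number of experts K.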


Full work available at URL: https://arxiv.org/abs/1602.03683




Cited in: 8 documents
