A Universal Approximation Theorem for Mixture-of-Experts Models
DOI: 10.1162/NECO_a_00892
zbMath: 1474.68266
arXiv: 1602.03683
OpenAlex: W2270588982
Wikidata: Q39391760
Scholia: Q39391760
MaRDI QID: Q5380595
Hien Duy Nguyen, Luke R. Lloyd-Jones, Geoffrey J. McLachlan
Publication date: 5 June 2019
Published in: Neural Computation
Full work available at URL: https://arxiv.org/abs/1602.03683
Classification (MSC):
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- General nonlinear regression (62J02)
- Learning and adaptive systems in artificial intelligence (68T05)
Related Items (5)
- Approximations of conditional probability density functions in Lebesgue spaces via mixture of experts models
- Uniform consistency in nonparametric mixture models
- A class of mixture of experts models for general insurance: theoretical developments
- Conditional sum-product networks: modular probabilistic circuits via gate functions
- A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts models
Cites Work
- Approximation of conditional densities by smooth mixtures of regressions
- Fitting finite mixtures of generalized linear regressions in R
- Estimating the dimension of a model
- Hierarchical mixtures-of-experts for exponential family regression models: Approximation and maximum likelihood estimation
- Laplace mixture of linear experts
- On Convergence Rates of Mixtures of Polynomial Experts
- New estimation and feature selection methods in mixture-of-experts models
- Error bounds for functional approximation and estimation using mixtures of experts
- On the asymptotic normality of hierarchical mixtures-of-experts for generalized linear models
- Approximation by superpositions of a sigmoidal function