New estimation and feature selection methods in mixture-of-experts models
Publication:3086511
DOI: 10.1002/cjs.10083 · zbMath: 1349.62071 · OpenAlex: W2112335378 · MaRDI QID: Q3086511
Publication date: 30 March 2011
Published in: Canadian Journal of Statistics
Full work available at URL: https://doi.org/10.1002/cjs.10083
MSC classifications:
- 62F12 Asymptotic properties of parametric estimators
- 62J07 Ridge regression; shrinkage estimators (Lasso)
- 62J02 General nonlinear regression
Related Items (16)
- Simultaneous variable selection and de-coarsening in multi-path change-point models
- Model selection for the localized mixture of experts models
- Robust variable selection in finite mixture of regression models using the t distribution
- A globally convergent algorithm for Lasso-penalized mixture of linear regression models
- Estimation and variable selection for mixture of joint mean and variance models
- Sparse principal component regression with adaptive loading
- New estimation in mixture of experts models using the Pearson type VII distribution
- Hybrid Hard-Soft Screening for High-dimensional Latent Class Analysis
- Mixture Composite Regression Models with Multi-type Feature Selection
- A Universal Approximation Theorem for Mixture-of-Experts Models
- Prediction with a flexible finite mixture-of-regressions
- Regularized Estimation and Feature Selection in Mixtures of Gaussian-Gated Experts Models
- Robust variable selection for finite mixture regression models
- Robust estimation for the varying coefficient partially nonlinear models
- Variable selection for skew-normal mixture of joint location and scale models
- Variable selection in finite mixture of regression models using the skew-normal distribution
Uses Software
Cites Work
- Estimating the dimension of a model
- Identifiability of models for clusterwise linear regression
- Hierarchical mixtures-of-experts for exponential family regression models: Approximation and maximum likelihood estimation
- Nonconcave penalized likelihood with a diverging number of parameters.
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- Variable selection using MM algorithms
- Nonparametric Bayes Conditional Distribution Modeling With Variable Selection
- On Consistency of Bayesian Inference with Mixtures of Logistic Regression
- Variable Selection in Finite Mixture of Regression Models
- Penalized logistic regression for detecting gene interactions
- Bayesian Inference in Mixtures-of-Experts and Hierarchical Mixtures-of-Experts Models With an Application to Speech Recognition
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Regularization and Variable Selection Via the Elastic Net
- Tuning parameter selectors for the smoothly clipped absolute deviation method