A comparison of models for learning how to dynamically integrate multiple cues in order to forecast continuous criteria
Publication: 2517780
DOI: 10.1016/J.JMP.2008.01.009
zbMath: 1151.91755
OpenAlex: W2074706172
Wikidata: Q62113258 (Scholia: Q62113258)
MaRDI QID: Q2517780
Jerome R. Busemeyer, Hugh Kelley
Publication date: 9 January 2009
Published in: Journal of Mathematical Psychology
Full work available at URL: https://doi.org/10.1016/j.jmp.2008.01.009
Neural networks for/in biological studies, artificial life and related topics (92B20)
Memory and learning in psychology (91E40)
Related Items (1)
Cites Work
- Convergence of least squares learning mechanisms in self-referential linear stochastic models
- Broadening the tests of learning models
- Statistical tests for comparing possibly misspecified and nonnested models
- Asymptotic Inference for Mixture Models by Using Data-Dependent Priors