Sequential Choice Under Ambiguity: Intuitive Solutions to the Armed-Bandit Problem
From MaRDI portal
Publication: Q4865407
DOI: 10.1287/MNSC.41.5.817
zbMATH Open: 0843.90005
OpenAlex: W1987360242
MaRDI QID: Q4865407
Authors: Robert J. Meyer, Yong Shi
Publication date: 13 February 1996
Published in: Management Science
Full work available at URL: https://doi.org/10.1287/mnsc.41.5.817
Recommendations
- An experimental analysis of the bandit problem
- Dynamic decision making under ambiguity: an experimental investigation
- Ambiguity aversion in multi-armed bandit problems
- The K-armed bandit problem with multiple priors
- Dynamic decision-making under uncertainty: an experimental investigation of choices between accumulator gambles
Keywords: heuristics; learning; risk taking; decision making under uncertainty; sequential decision analysis; armed-bandit problem; dynamic decision theory; optimal Bernoulli sampler; repeated choices
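To make the keyword "armed-bandit problem" concrete, the following is a minimal illustrative sketch of the Bernoulli bandit setting the paper studies: an agent repeatedly chooses among arms with unknown success probabilities, trading off exploration against exploitation. The epsilon-greedy rule and the arm probabilities below are hypothetical choices for illustration, not the authors' model or the optimal Bernoulli sampler discussed in the paper.

```python
import random

def bernoulli_bandit(true_probs, horizon, epsilon=0.1, seed=0):
    """Simulate an epsilon-greedy heuristic on a Bernoulli armed-bandit.

    true_probs : unknown success probability of each arm (hypothetical values)
    horizon    : number of sequential pulls
    Returns per-arm pull counts and total accumulated reward.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    pulls = [0] * n_arms      # how often each arm was chosen
    successes = [0] * n_arms  # observed successes per arm
    total = 0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore a random arm
        else:
            # exploit: pick the arm with the best smoothed empirical rate
            arm = max(range(n_arms),
                      key=lambda a: (successes[a] + 1) / (pulls[a] + 2))
        reward = 1 if rng.random() < true_probs[arm] else 0
        pulls[arm] += 1
        successes[arm] += reward
        total += reward
    return pulls, total

pulls, total = bernoulli_bandit([0.3, 0.7], horizon=1000)
```

Over a long horizon the heuristic concentrates pulls on the better arm while still sampling the worse one occasionally, which is the exploration/exploitation tension the experimental work on intuitive bandit play examines.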
Cited In (15)
- Nonparametric learning rules from bandit experiments: the eyes have it!
- A general latent assignment approach for modeling psychological contaminants
- Search and active learning with correlated information: empirical evidence from mid-Atlantic clam fishermen
- A Bayesian analysis of human decision-making on bandit problems
- The impact of experience on decisions based on pre-choice samples and the face-or-cue hypothesis
- Optimal selection of obsolescence mitigation strategies using a restless bandit model
- Robust experimentation in the continuous time bandit problem
- Unfazed by both the bull and bear: strategic exploration in dynamic environments
- Using adaptive learning in credit scoring to estimate take-up probability distribution
- Title not available
- Multi-state choices with aggregate feedback on unfamiliar alternatives
- The K-armed bandit problem with multiple priors
- Learning, risk attitude and hot stoves in restless bandit problems
- Ambiguity aversion in multi-armed bandit problems
- Meaningful learning in weighted voting games: an experiment