Ambiguity aversion in multi-armed bandit problems
DOI: 10.1007/s11238-011-9259-2 · zbMATH Open: 1274.91110 · OpenAlex: W2058587255 · MaRDI QID: Q656883 · FDO: Q656883
Authors: Christopher Anderson
Publication date: 13 January 2012
Published in: Theory and Decision
Full work available at URL: https://doi.org/10.1007/s11238-011-9259-2
MSC Classification
- Statistical methods; economic indices and measures (91B82)
- Bayesian problems; characterization of Bayes procedures (62C10)
- Markov and semi-Markov decision processes (90C40)
- Experimental studies (91A90)
- Probabilistic games; gambling (91A60)
Cites Work
- Maxmin expected utility with non-unique prior
- Risk, ambiguity and the Savage axioms
- Title not available
- Subjective Probability and Expected Utility without Additivity
- Recursive smooth ambiguity preferences
- Recent developments in modeling preferences: Uncertainty and ambiguity
- Ellsberg Revisited: An Experimental Study
- A Definition of Subjective Probability
- The Ellsberg Paradox and Risk Aversion: An Anticipated Utility Approach
- Title not available
- Title not available
- A Bayesian analysis of human decision-making on bandit problems
- Price Differences in Almost Competitive Markets
- Good news and bad news: Search from unknown wage offer distributions
- An experimental analysis of the bandit problem
- Sequential Choice Under Ambiguity: Intuitive Solutions to the Armed-Bandit Problem
Cited In (14)
- On the (non-) reliance on algorithms -- a decision-theoretic account
- Nonparametric learning rules from bandit experiments: the eyes have it!
- Ambiguity aversion and ambiguity content in decision making under uncertainty
- Capacity expansion under uncertainty in an oligopoly using indirect reinforcement-learning
- Robust experimentation in the continuous time bandit problem
- Learning and self-confirming long-run biases
- Ambiguity aversion in the small and in the large for weighted linear utility
- Risk aversion in expected intertemporal discounted utilities bandit problems
- Sequential Choice Under Ambiguity: Intuitive Solutions to the Armed-Bandit Problem
- The K-armed bandit problem with multiple priors
- Randomize at Your Own Risk: On the Observability of Ambiguity Aversion
- A note on optimal experimentation under risk aversion
- The effect of ambiguity aversion on reward scheme choice
- Updating Ambiguity Averse Preferences