On Sequential Designs for Maximizing the Sum of $n$ Observations

From MaRDI portal

DOI: 10.1214/aoms/1177728073
zbMath: 0073.14203
MaRDI QID: Q3234931

Russell N. Bradt, Selmer Johnson, Samuel Karlin

Publication date: 1956

Published in: The Annals of Mathematical Statistics

Full work available at URL: https://doi.org/10.1214/aoms/1177728073



Related Items

- Optimistic Gittins Indices
- An asymptotically optimal heuristic for general nonstationary finite-horizon restless multi-armed, multi-action bandits
- Sequentielle Versuchspläne (Sequential experimental designs)
- Adaptive competitive decision in repeated play of a matrix game with uncertain entries
- A limit property of sequential decision process
- A central limit theorem, loss aversion and multi-armed bandits
- One-armed bandit problem for parallel data processing systems
- Sequentielle Versuchs-Pläne (Sequential experimental designs)
- A perpetual search for talents across overlapping generations: a learning process
- Dynamic priority allocation via restless bandit marginal productivity indices
- Learning to signal: Analysis of a micro-level reinforcement model
- Bernoulli two-armed bandits with geometric termination
- The apparent conflict between estimation and control - a survey of the two-armed bandit problem
- On the two armed bandit with one probability known
- Small-sample performance of Bernoulli two-armed bandit Bayesian strategies
- On a theorem of Kelley
- Herbert Robbins and sequential analysis
- On Bayesian index policies for sequential resource allocation
- On monotone optimal decision rules and the stay-on-a-winner rule for the two-armed bandit
- On the optimal amount of experimentation in sequential decision problems
- Ein Irrfahrten-Problem und seine Anwendung auf die Theorie der sequentiellen Versuchs-Pläne (A random-walk problem and its application to the theory of sequential experimental designs)
- Some problems of optimal sampling strategy
- Strategic learning in teams
- Comparison of two Bernoulli processes by multiple stage sampling using Bayesian decision theory
- Covariate models for Bernoulli bandits
- A Two-Armed Bandit Problem with possibility of no Information
- Dynamic allocation policies for the finite horizon one armed bandit problem