The Bayesian process control with multiple assignable causes
Publication: 6237933
arXiv: 1212.2523
MaRDI QID: Q6237933
Publication date: 11 December 2012
Abstract: We study an optimal process control problem with multiple assignable causes. The process is initially in control but is subject to a random transition to one of multiple out-of-control states due to assignable causes. The objective is to find an optimal stopping rule under partial observation that maximizes the total expected reward over an infinite horizon. The problem is formulated as a partially observable Markov decision process (POMDP) whose belief space consists of state probability vectors. New observations are obtained at a fixed sampling interval and used to update the belief vector via Bayes' theorem. Under standard assumptions, we show that a conditional control limit policy is optimal and that there exists a convex, non-increasing control limit that partitions the belief space into two individually connected regions: a stopping region and a continuation region. We further derive analytical bounds for the control limit. An algorithm based on the structural results is devised, which considerably reduces the computation. We also shed light on the selection of the optimal fixed sampling interval.
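The abstract describes updating a belief vector over the in-control and out-of-control states with Bayes' theorem at each sampling epoch and stopping when the belief enters a control region. The following is a minimal illustrative sketch of that belief update, not the paper's method: the transition matrix P, the Gaussian observation densities, the state means, and the scalar control_limit threshold are all hypothetical choices made here for demonstration (the paper's actual control limit is a structured partition of the belief space, not a single scalar).

import numpy as np

def bayes_update(belief, P, likelihoods):
    """One-step belief update: predict with transition matrix P, then
    correct with each state's observation likelihood (Bayes' theorem)."""
    predicted = belief @ P                      # prior after the sampling interval
    posterior = predicted * likelihoods         # elementwise likelihood correction
    return posterior / posterior.sum()          # normalize to a probability vector

def gaussian_likelihoods(y, means, sigma):
    """Likelihood of observation y under each state's (assumed) normal density."""
    return np.exp(-0.5 * ((y - means) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical model: state 0 is in-control, states 1-2 are out-of-control causes.
P = np.array([[0.90, 0.06, 0.04],   # in-control may shift to either assignable cause
              [0.00, 1.00, 0.00],   # out-of-control states are absorbing
              [0.00, 0.00, 1.00]])
means = np.array([0.0, 1.5, 3.0])   # state-dependent observation means (illustrative)
sigma = 1.0
control_limit = 0.6                  # illustrative scalar threshold, not from the paper

belief = np.array([1.0, 0.0, 0.0])   # process starts in-control
for y in [0.2, 1.8, 2.4]:            # observations taken at fixed sampling intervals
    belief = bayes_update(belief, P, gaussian_likelihoods(y, means, sigma))
    if 1.0 - belief[0] > control_limit:
        print("stop and intervene, belief =", belief)
        break
    print("continue, belief =", belief)

In this toy version the stopping rule compares the total out-of-control probability against a fixed threshold; the paper's conditional control limit policy instead uses a convex, non-increasing boundary over the belief simplex.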