Structured Threshold Policies for Dynamic Sensor Scheduling—A Partially Observed Markov Decision Process Approach
DOI: 10.1109/TSP.2007.897908 · zbMATH Open: 1390.90335 · OpenAlex: W2119940423 · MaRDI QID: Q4567480 · FDO: Q4567480
Authors: Vikram Krishnamurthy, Dejan V. Djonin
Publication date: 27 June 2018
Published in: IEEE Transactions on Signal Processing
Full work available at URL: https://doi.org/10.1109/tsp.2007.897908
Mathematics Subject Classification:
- Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
- Markov and semi-Markov decision processes (90C40)
- Stochastic scheduling theory in operations research (90B36)
Cited In (7)
- A restless bandit model for resource allocation, competition, and reservation
- Networks of biosensors: decentralized activation and social learning
- Myopic bounds for optimal policy of POMDPs: an extension of Lovejoy's structural results
- Title not available
- Optimal Threshold Policies for Multivariate Stopping-Time POMDPs
- Planning for multiple measurement channels in a continuous-state POMDP
- UTS-based foresight optimization of sensor scheduling for low interception risk tracking