OL-DEC-MDP model for multiagent online scheduling with a time-dependent probability of success (Q1719084)
From MaRDI portal
scientific article; zbMATH DE number 7017197
| Language | Label | Description | Also known as |
|---|---|---|---|
| default for all languages | No label defined | | |
| English | OL-DEC-MDP model for multiagent online scheduling with a time-dependent probability of success | scientific article; zbMATH DE number 7017197 | |
Statements
OL-DEC-MDP model for multiagent online scheduling with a time-dependent probability of success (English)
8 February 2019
Summary: Focusing on the online multiagent scheduling problem, this paper considers a time-dependent probability of success and processing duration and proposes an OL-DEC-MDP (opportunity-loss decentralized Markov decision process) model that incorporates opportunity loss into scheduling decisions to improve overall performance. Both the success probability of job processing and the processing duration depend on the time at which processing starts: an agent is more likely to complete an assigned job when it starts earlier, but the opportunity loss may also be higher because the agent remains engaged for longer. The OL-DEC-MDP model therefore introduces a reward function that accounts for opportunity loss, estimated by predicting upcoming jobs via a sampling method on the job-arrival process. Heuristic strategies are introduced for each agent to compute the best starting time for an incoming job, and an incoming job is always scheduled to the agent with the highest reward among all agents under their best starting policies. Simulation experiments show that the OL-DEC-MDP model improves overall scheduling performance in heavy-load environments compared with models that do not consider opportunity loss.
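The scheduling rule summarized above — each agent searches for its best starting time under a reward that trades the time-dependent success probability against a sampled estimate of opportunity loss, and the job goes to the agent whose best starting policy yields the highest reward — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the `success_prob` and `duration` functions, the Poisson arrival model for sampling future jobs, and all parameter values are hypothetical assumptions chosen for readability.

```python
import random

def success_prob(t: float) -> float:
    # Toy assumption: success probability decays linearly in the start time.
    return max(0.0, 1.0 - 0.05 * t)

def duration(t: float) -> float:
    # Toy assumption: starting earlier means a longer engagement.
    return max(1.0, 10.0 - 0.3 * t)

def opportunity_loss(start: float, rate: float, n: int, rng: random.Random) -> float:
    """Monte-Carlo estimate of the value of jobs arriving (assumed Poisson
    with the given rate) while the agent is still busy with this job."""
    busy_until = start + duration(start)
    total = 0.0
    for _ in range(n):
        t = start
        while True:
            t += rng.expovariate(rate)  # next sampled arrival
            if t >= busy_until:
                break
            total += success_prob(t)  # value forgone for that missed arrival
    return total / n

def best_start_reward(free_at: float, rate: float = 0.2,
                      n: int = 200, seed: int = 0):
    """Heuristic search over candidate starting times for one agent;
    returns (best_start, reward) under the opportunity-loss reward."""
    rng = random.Random(seed)
    best = None
    for s in (free_at + d for d in range(10)):
        r = success_prob(s) - opportunity_loss(s, rate, n, rng)
        if best is None or r > best[1]:
            best = (s, r)
    return best

def assign_job(agents_free_at):
    """Greedy rule: give the incoming job to the agent whose best
    starting policy yields the highest reward."""
    plans = [best_start_reward(f) for f in agents_free_at]
    i = max(range(len(plans)), key=lambda k: plans[k][1])
    return i, plans[i]
```

A usage note: with two agents free at times 0.0 and 5.0, `assign_job([0.0, 5.0])` returns the chosen agent index together with that agent's best starting time and reward; the trade-off means the earliest-free agent does not automatically win, since an early start also incurs the largest sampled opportunity loss.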