Queueing Network Controls via Deep Reinforcement Learning
From MaRDI portal
Publication:5084497
DOI: 10.1287/stsy.2021.0081
zbMath: 1489.60145
arXiv: 2008.01644
OpenAlex: W3047127288
MaRDI QID: Q5084497
No author found.
Publication date: 24 June 2022
Published in: Stochastic Systems
Full work available at URL: https://arxiv.org/abs/2008.01644
- Stochastic network models in operations research (90B15)
- Queueing theory (aspects of probability theory) (60K25)
- Queues and service in operations research (90B22)
Uses Software
Cites Work
- Approximate linear programming for networks: average cost bounds
- An online actor-critic algorithm with function approximation for constrained Markov decision processes
- State space collapse with application to heavy traffic limits for multiclass queueing networks
- Heavy traffic analysis of a system with parallel servers: Asymptotic optimality of discrete-review policies
- Optimization of multiclass queueing networks: Polyhedral and nonlinear characterizations of achievable performance
- Convergence to equilibria for fluid models of head-of-the-line proportional processor sharing queueing networks
- Brownian models of open processing networks: Canonical representation of workload
- Heavy traffic analysis of open processing networks with complete resource pooling: asymptotic optimality of discrete review policies
- Brownian models of multiclass queueing networks: Current status and open problems
- Re-entrant lines
- Value iteration and optimization of multiclass queueing networks
- Batch size effects on the efficiency of control variates in simulation
- Performance evaluation and policy selection in multiclass networks
- Dynamic scheduling of a system with two parallel servers in heavy traffic with resource pooling: Asymptotic optimality of a threshold policy
- Asymptotic optimality of tracking policies in stochastic networks
- Discrete-review policies for scheduling stochastic networks: trajectory tracking and fluid-scale asymptotic optimality
- A unified perturbation analysis framework for countable Markov chains
- Fluctuation smoothing policies are stable for stochastic re-entrant lines
- A fluid limit model criterion for instability of multiclass queueing networks
- Robust Fluid Processing Networks
- Uniformization for semi-Markov decision processes under stationary policies
- Technical Note—An Equivalence Between Continuous and Discrete Time Markov Decision Processes
- Performance Analysis of Queueing Networks via Robust Optimization
- Markov Chains and Stochastic Stability
- The Linear Programming Approach to Approximate Dynamic Programming
- Stable, distributed, real-time scheduling of flexible manufacturing/assembly/disassembly systems
- Heavy Traffic Convergence of a Controlled, Multiclass Queueing System
- Dynamic Scheduling of a Multiclass Fluid Network
- Performance bounds for queueing networks and scheduling policies
- On Actor-Critic Algorithms
- Convergence of Simulation-Based Policy Iteration
- Simulation-based optimization of Markov reward processes
- Applied Probability and Queues
- Variance reduction through smoothing and control variates for Markov chain simulations
- Processing Networks
- Scheduling Networks of Queues: Heavy Traffic Analysis of a Two-Station Closed Network
- Target-Pursuing Scheduling and Routing Policies for Multiclass Queueing Networks
- Approximating Martingales for Variance Reduction in Markov Process Simulation
This page was built for publication: Queueing Network Controls via Deep Reinforcement Learning