The following pages link to Approximate Dynamic Programming (Q5310431):
Displayed 50 items.
- Planning for multiple measurement channels in a continuous-state POMDP (Q360261)
- Capacity allocation for demand of different customer-product-combinations with cancellations, no-shows, and overbooking when there is a sequential delivery of service (Q363587)
- An approximate dynamic programming framework for modeling global climate policy under decision-dependent uncertainty (Q373210)
- Generalized Markov models of infectious disease spread: a novel framework for developing dynamic health policies (Q420890)
- The optimal control of just-in-time-based production and distribution systems and performance comparisons with optimized pull systems (Q421584)
- Network revenue management with inventory-sensitive bid prices and customer choice (Q421775)
- Approximate dynamic programming for capacity allocation in the service industry (Q439484)
- Approximate dynamic programming with Bézier curves/surfaces for top-percentile traffic routing (Q439572)
- Fitting piecewise linear continuous functions (Q439615)
- Optimal tracking control of nonlinear partially-unknown constrained-input systems using integral reinforcement learning (Q458982)
- Integral reinforcement learning and experience replay for adaptive optimal control of partially-unknown constrained-input continuous-time systems (Q463819)
- A sparse collocation method for solving time-dependent HJB equations using multivariate \(B\)-splines (Q466457)
- Sampled fictitious play for approximate dynamic programming (Q547121)
- The method of endogenous gridpoints with occasionally binding constraints among endogenous variables (Q602989)
- Model-free \(H_{\infty }\) control design for unknown linear discrete-time systems via Q-learning with LMI (Q608439)
- Suboptimal solutions to dynamic optimization problems via approximations of the policy functions (Q613579)
- Minimizing total tardiness in a stochastic single machine scheduling problem using approximate dynamic programming (Q633553)
- Stochastic control via direct comparison (Q633815)
- Multi-player non-zero-sum games: online adaptive learning solution of coupled Hamilton-Jacobi equations (Q642894)
- Computation of approximate optimal policies in a partially observed inventory model with rain checks (Q642900)
- Optimal Bayesian strategies for the infinite-armed Bernoulli bandit (Q643377)
- A dynamic programming strategy to balance exploration and exploitation in the bandit problem (Q647433)
- Approximation of Markov decision processes with general state space (Q663675)
- Integral \(Q\)-learning and explorized policy iteration for adaptive optimal control of continuous-time linear systems (Q694822)
- Approximate dynamic programming via direct search in the space of value function approximations (Q713118)
- Discrete neural dynamic programming in wheeled mobile robot control (Q718550)
- Optimal learning for sequential sampling with non-parametric beliefs (Q742143)
- Energy efficiency and risk management in public buildings: strategic model for robust planning (Q744254)
- Efficient computer experiment-based optimization through variable selection (Q744721)
- Synergies of operations research and data mining (Q976388)
- Optimally maintaining a Markovian deteriorating system with limited imperfect repairs (Q976454)
- Partially observable Markov decision process approximations for adaptive sensing (Q977009)
- Monotone optimal replacement policies for a Markovian deteriorating system in a controllable environment (Q991463)
- An approximate dynamic programming approach for the vehicle routing problem with stochastic demands (Q1027533)
- Resource-constrained management of heterogeneous assets with stochastic deterioration (Q1042122)
- A stochastic control formalism for dynamic biologically conformal radiation therapy (Q1926672)
- Optimal patient and personnel scheduling policies for care-at-home service facilities (Q1926675)
- Solving the dynamic ambulance relocation and dispatching problem using approximate dynamic programming (Q1926680)
- Dynamic programming and value-function approximation in sequential decision problems: error analysis and numerical results (Q1949593)
- Adaptive-resolution reinforcement learning with polynomial exploration in deterministic domains (Q1959632)
- Dynamic multi-appointment patient scheduling for radiation therapy (Q2253376)
- A tutorial on adaptive design optimization (Q2437248)
- Sharpe-ratio pricing and hedging of contingent claims in incomplete markets by convex programming (Q2440802)
- Computational bounds for elevator control policies by large scale linear programming (Q2441574)
- Top-percentile traffic routing problem by dynamic programming (Q2443403)
- Approximate dynamic programming for stochastic \(N\)-stage optimization with application to optimal consumption under uncertainty (Q2450902)
- Particle methods for stochastic optimal control problems (Q2636612)
- Valuing callable and putable revenue-performance-linked project backed securities (Q2786035)
- An approximate dynamic programming method for multi-input multi-output nonlinear systems (Q2857151)
- Approximate policy iteration: a survey and some new methods (Q2887629)