Continuity of cost in Borkar control topology and implications on discrete space and time approximations for controlled diffusions under several criteria
DOI: 10.1214/24-ejp1093 · arXiv: 2209.14982 · OpenAlex: W4392051468 · MaRDI QID: Q6126973
Serdar Yüksel, Somnath Pradhan
Publication date: 10 April 2024
Published in: Electronic Journal of Probability
Full work available at URL: https://arxiv.org/abs/2209.14982
Keywords: Hamilton-Jacobi-Bellman equation · controlled diffusions · near optimality · finite actions · piecewise constant policy
MSC: Control/observation systems governed by partial differential equations (93C20) · Optimal stochastic control (93E20) · Diffusion processes (60J60) · Topological methods (93B24)
Cites Work
- Piecewise constant policy approximations to Hamilton-Jacobi-Bellman equations
- Ergodic control of multidimensional diffusions. II: Adaptive control
- Probability methods for approximations in stochastic control and for elliptic equations
- Near optimality of quantized policies in stochastic control under weak continuity conditions
- Controlled diffusion processes
- Controlled diffusions with constraints
- Approximating value functions for controlled degenerate diffusion processes by using piece-wise constant policies.
- On the rate of convergence of finite-difference approximations for Bellman's equations with variable coefficients
- Finite approximations in discrete-time stochastic control. Quantized models and asymptotic optimality
- A topology for Markov controls
- Mean value theorems for stochastic integrals
- Convex analytic method revisited: further optimality results and performance of deterministic policies in average cost stochastic control
- Improved order 1/4 convergence for piecewise constant policy approximation of stochastic control problems
- Occupation measures for controlled Markov processes: Characterization and optimality
- Multidimensional diffusion processes.
- A partial history of the early development of continuous-time nonlinear stochastic systems theory
- An extension of Tietze's theorem
- Finite Linear Programming Approximations of Constrained Discounted Markov Decision Processes
- Uniform Recurrence Properties of Controlled Diffusions and Applications to Optimal Control
- Ergodic Control of Diffusion Processes
- Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations. Part I: The dynamic programming principle and applications
- Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations. Part II: Viscosity solutions and uniqueness
- A remark on the attainable distributions of controlled diffusions
- Ergodic Control of Multidimensional Diffusions I: The Existence Results
- Convergence of discretization procedures in dynamic programming
- On the convergence rate of approximation schemes for Hamilton-Jacobi-Bellman equations
- On the Asymptotic Optimality of Finite Approximations to Markov Decision Processes with Borel Spaces
- Real Analysis and Probability
- Numerical Approximations for Stochastic Differential Games
- Asymptotic Optimality of Finite Model Approximations for Partially Observed Markov Decision Processes With Discounted Cost
- Performance Loss Bounds for Approximate Value Iteration with State Aggregation
- Error Bounds for Monotone Approximation Schemes for Hamilton--Jacobi--Bellman Equations
- Approximate Q Learning for Controlled Diffusion Processes and Its Near Optimality