Some Estimates for Finite Difference Approximations
DOI: 10.1137/0327031 · zbMATH Open: 0684.93088 · OpenAlex: W2052165002 · MaRDI QID: Q4735120 · FDO: Q4735120
Author: José-Luis Menaldi
Publication date: 1989
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://digitalcommons.wayne.edu/mathfrp/34
Document type: scientific article; zbMATH DE number: 4174062
Recommendations
- An approximation scheme for the optimal control of diffusion processes
- On finite-difference approximations for normalized Bellman equations
- On the rate of convergence of finite-difference approximations for Bellman's equations with variable coefficients
- Probability methods for approximations in stochastic control and for elliptic equations
Keywords: Hamilton-Jacobi-Bellman equation; diffusion processes; finite difference methods; discrete maximum principle; complete filtered probability space; n-dimensional Wiener process
MSC: Numerical optimization and variational techniques (65K10); Discrete approximations in optimal control (49M25); Error analysis and interval analysis (65G99); Optimal stochastic control (93E20)
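The keywords above can be illustrated by a minimal sketch of the kind of scheme this literature analyzes: a monotone (Kushner-type Markov chain) finite-difference approximation of a discounted Hamilton-Jacobi-Bellman equation, solved by value iteration. The concrete model below (drift b(x,a) = a, running reward r, sigma, grid, and the clamped boundary handling) is our own illustrative assumption, not taken from the paper.

```python
import numpy as np

# Toy 1-D discounted HJB equation (model assumed for illustration):
#     rho*u(x) = max_a [ b(x,a) u'(x) + 0.5*sigma^2 u''(x) + r(x,a) ]
# discretized with upwind differences, which yields a controlled Markov
# chain on the grid with nonnegative transition weights summing to one.

rho, sigma, h = 1.0, 0.5, 0.1
xs = np.arange(-1.0, 1.0 + h / 2, h)       # grid on [-1, 1]
actions = (-1.0, 0.0, 1.0)                 # assumed controls, b(x, a) = a

u = np.zeros_like(xs)
for _ in range(5000):                      # value iteration to a fixed point
    up = np.r_[u[1:], u[-1]]               # u(x+h), clamped at the boundary
    dn = np.r_[u[0], u[:-1]]               # u(x-h), clamped at the boundary
    u_new = np.full_like(u, -np.inf)
    for a in actions:
        b = a
        r = -xs**2 - 0.1 * abs(a)          # assumed running reward
        Q = sigma**2 + h * abs(b)          # normalization constant
        dt = h**2 / Q                      # state-dependent interpolation step
        p_up = (0.5 * sigma**2 + h * max(b, 0.0)) / Q   # upwind transition
        p_dn = (0.5 * sigma**2 + h * max(-b, 0.0)) / Q  # probabilities
        u_new = np.maximum(u_new,
                           (p_up * up + p_dn * dn + dt * r) / (1 + rho * dt))
    if np.max(np.abs(u_new - u)) < 1e-10:
        u = u_new
        break
    u = u_new
```

Because the transition weights p_up and p_dn are nonnegative and sum to one, the scheme is monotone; this is the discrete maximum principle property that underlies convergence-rate estimates of the kind studied in the works listed here.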
Cited In (27)
- A fast algorithm for the two dimensional HJB equation of stochastic control
- Error estimation and adaptive discretization for the discrete stochastic Hamilton-Jacobi-Bellman equation
- Boundary treatment and multigrid preconditioning for semi-Lagrangian schemes applied to Hamilton-Jacobi-Bellman equations
- On finite-difference approximations for normalized Bellman equations
- A survey of numerical solutions for stochastic control problems: some recent progress
- Some non monotone schemes for Hamilton-Jacobi-Bellman equations
- Finite element approximation of some indefinite elliptic problems
- Probabilistic error analysis for some approximation schemes to optimal control problems
- Convergence rates of Markov chain approximation methods for controlled diffusions with stopping
- High-order filtered schemes for time-dependent second order HJB equations
- Title not available
- Dynamic programming and error estimates for stochastic control problems with maximum cost
- A sparse Markov chain approximation of LQ-type stochastic control problems
- Title not available
- Error estimates for approximate solutions to Bellman equations associated with controlled jump-diffusions
- Semi-Lagrangian schemes for linear and fully non-linear diffusion equations
- Duality-based a posteriori error estimates for some approximation schemes for optimal investment problems
- Numerical solutions for optimal control of stochastic Kolmogorov systems with regime-switching and random jumps
- Title not available
- Semi-Lagrangian discontinuous Galerkin schemes for some first- and second-order partial differential equations
- Fast computational procedure for solving multi-item single-machine lot scheduling optimization problems
- On the convergence rate of approximation schemes for Hamilton-Jacobi-Bellman equations
- Numerical analysis of strongly nonlinear PDEs
- An approximation scheme for the optimal control of diffusion processes
- On the rate of convergence of approximation schemes for Bellman equations associated with optimal stopping time problems
- Infinite horizon stochastic optimal control problems with running maximum cost
- Rate of convergence of finite difference approximations for degenerate ordinary differential equations