Approximation and regular perturbation of optimal control problems via Hamilton-Jacobi theory (Q1179159)

From MaRDI portal
    Statements

    Approximation and regular perturbation of optimal control problems via Hamilton-Jacobi theory (English)
    26 June 1992
    This paper is based on new results on the convergence of viscosity solutions of the Hamilton-Jacobi equation \(u_h + H_h(x,u_h,Du_h) = 0\) to the solution of \(u + H(x,u,Du) = 0\) as \(h \to 0\). In particular, under suitable assumptions the authors establish a convergence-rate estimate \(\sup_x |u(x) - u_h(x)| \leq Ch^\beta\) for some \(\beta > 0\). These results prove useful in a variety of problems arising in control theory; the applications cover infinite horizon control, problems with exit times, two-player zero-sum games, pursuit-evasion problems, and time optimal problems. In particular, the authors apply their results to the study of certain regularization procedures for the minimal time function in the case of linear processes and, more generally, consider the effect on the minimal time function when the system equations are subject to small perturbations.
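    For orientation, here is a sketch of how such a perturbed equation can arise in the discounted infinite horizon setting; this particular setup (dynamics \(f_h\), running cost \(\ell\), discount rate 1) is an illustrative assumption, not quoted from the paper. The value function
    \[
    u_h(x) = \inf_{a(\cdot)} \int_0^\infty e^{-t}\, \ell\bigl(y(t),a(t)\bigr)\,dt,
    \qquad \dot y(t) = f_h\bigl(y(t),a(t)\bigr), \quad y(0) = x,
    \]
    is, under standard assumptions, the viscosity solution of
    \[
    u_h + H_h(x, Du_h) = 0, \qquad
    H_h(x,p) = \sup_{a \in A}\bigl\{ -f_h(x,a)\cdot p - \ell(x,a) \bigr\},
    \]
    a special case of the equation above. If the perturbed dynamics satisfy \(\sup_{x,a} |f_h(x,a) - f(x,a)| \leq Ch\), then \(|H_h(x,p) - H(x,p)| \leq Ch\,|p|\), and, when the value functions are Lipschitz, comparison arguments for viscosity solutions yield an estimate of the form \(\sup_x |u(x) - u_h(x)| \leq Ch^\beta\), with \(\beta\) determined by the regularity of the Hamiltonians.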
    convergence
    viscosity solutions
    Hamilton-Jacobi equation
    infinite horizon control
    exit times
    time optimal problems
    small perturbations
