Two-phase iteration for value function approximation and hyperparameter optimization in Gaussian-kernel-based adaptive critic design (Q1666524)
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Two-phase iteration for value function approximation and hyperparameter optimization in Gaussian-kernel-based adaptive critic design | scientific article | |
Statements
Two-phase iteration for value function approximation and hyperparameter optimization in Gaussian-kernel-based adaptive critic design (English)
27 August 2018
Summary: Adaptive Dynamic Programming (ADP) with a critic-actor architecture is an effective way to perform online learning control. To avoid the subjectivity involved in designing a neural network that serves as the critic, kernel-based adaptive critic design (ACD) was developed recently. A static kernel-based model raises two essential issues: how to determine proper hyperparameters in advance and how to select the right samples to describe the value function. Both rely on assessing the values of the samples. Based on a theoretical analysis, this paper presents a two-phase simultaneous learning method for a Gaussian-kernel-based critic network. It estimates the values of samples without revisiting them infinitely often, while simultaneously optimizing the hyperparameters of the kernel model. Based on the estimated sample values, the sample set can be refined by adding alternatives or deleting redundancies. Combining this critic design with an actor network, we present a Gaussian-kernel-based Adaptive Dynamic Programming (GK-ADP) approach. Simulations verify its feasibility, in particular the necessity of two-phase learning, the convergence characteristics, and the improvement in system performance obtained with a varying sample set.
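To make the two-phase idea concrete, the sketch below alternates between a weight update (phase 1, estimating sample values) and a kernel-width update (phase 2, tuning the hyperparameter) for a Gaussian-kernel critic. Only the value model \(V(s) = \sum_i \alpha_i \exp(-\|s - s_i\|^2 / 2\sigma^2)\) follows the abstract; the TD-style weight update, the numerical gradient step on \(\sigma\), and all names such as `GaussianKernelCritic`, `phase1_update_weights`, and `phase2_update_sigma` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def gaussian_kernel(s, samples, sigma):
    """Vector of Gaussian kernel values k(s, s_i) for all stored samples."""
    diffs = samples - np.asarray(s, dtype=float)          # (n, d)
    return np.exp(-np.sum(diffs**2, axis=1) / (2.0 * sigma**2))  # (n,)

class GaussianKernelCritic:
    """Minimal sketch of a Gaussian-kernel critic with alternating two-phase updates."""

    def __init__(self, samples, sigma=1.0, gamma=0.95, lr_w=0.05, lr_sigma=1e-3):
        self.samples = np.asarray(samples, dtype=float)    # stored sample states s_i
        self.alpha = np.zeros(len(self.samples))           # kernel weights ("sample values")
        self.sigma = sigma                                  # kernel width (hyperparameter)
        self.gamma, self.lr_w, self.lr_sigma = gamma, lr_w, lr_sigma

    def features(self, s):
        return gaussian_kernel(s, self.samples, self.sigma)

    def value(self, s):
        return float(self.features(s) @ self.alpha)

    def phase1_update_weights(self, s, r, s_next):
        """Phase 1: TD(0)-style update of the kernel weights, sigma held fixed."""
        phi = self.features(s)
        delta = r + self.gamma * self.value(s_next) - self.value(s)
        self.alpha += self.lr_w * delta * phi
        return delta

    def phase2_update_sigma(self, s, r, s_next):
        """Phase 2: gradient step on sigma to reduce the squared TD error,
        with the weights held fixed (numerical gradient for brevity)."""
        eps = 1e-4
        def sq_td_error(sig):
            phi = gaussian_kernel(s, self.samples, sig)
            phi_next = gaussian_kernel(s_next, self.samples, sig)
            return (r + self.gamma * phi_next @ self.alpha - phi @ self.alpha) ** 2
        grad = (sq_td_error(self.sigma + eps) - sq_td_error(self.sigma - eps)) / (2 * eps)
        self.sigma = max(1e-3, self.sigma - self.lr_sigma * grad)
```

Alternating the two phases on the same stream of transitions mirrors the "two-phase simultaneous learning" idea: the weights track the sample values while the kernel width is adjusted against the same TD error, so neither has to be fixed by hand in advance.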