Choquet Regularization for Continuous-Time Reinforcement Learning

From MaRDI portal
Publication: Q6073554

DOI: 10.1137/22M1524734
arXiv: 2208.08497
OpenAlex: W4386750089
MaRDI QID: Q6073554

Ruodu Wang, Xun Yu Zhou, Xia Han

Publication date: 11 October 2023

Published in: SIAM Journal on Control and Optimization

Abstract: We propose *Choquet regularizers* to measure and manage the level of exploration in reinforcement learning (RL), and reformulate the continuous-time entropy-regularized RL problem of Wang et al. (2020, JMLR, 21(198)), replacing the differential entropy used for regularization with a Choquet regularizer. We derive the Hamilton--Jacobi--Bellman equation of the problem and solve it explicitly in the linear--quadratic (LQ) case by statically maximizing a mean--variance constrained Choquet regularizer. Under the LQ setting, we derive explicit optimal distributions for several specific Choquet regularizers, and conversely identify the Choquet regularizers that generate a number of broadly used exploratory samplers such as epsilon-greedy, exponential, uniform, and Gaussian.
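As a rough numerical illustration of the regularizer the abstract refers to: a Choquet regularizer can be written as a quantile integral, Phi_h(mu) = ∫₀¹ Q_mu(p) h'(1−p) dp, for a concave distortion function h with h(0) = h(1) = 0, which makes Phi_h location-invariant and zero on point masses. The sketch below is an assumption-laden approximation, not the paper's implementation; the function names and the particular distortion h(p) = p(1−p) are illustrative choices.

```python
import numpy as np

def choquet_regularizer(quantile, h_prime, n=100_000):
    """Midpoint-rule approximation of Phi_h(mu) = int_0^1 Q(p) h'(1-p) dp.

    `quantile` is the quantile function Q of the distribution mu;
    `h_prime` is the derivative of the (concave) distortion h.
    """
    p = (np.arange(n) + 0.5) / n          # midpoints of a uniform grid on (0, 1)
    return float(np.mean(quantile(p) * h_prime(1.0 - p)))

# Illustrative distortion h(p) = p(1-p): concave, h(0) = h(1) = 0,
# with h'(p) = 1 - 2p, so h'(1-p) = 2p - 1.
h_prime = lambda p: 1.0 - 2.0 * p

# Uniform(0,1): Q(p) = p, and the integral evaluates in closed form to 1/6.
val_uniform = choquet_regularizer(lambda p: p, h_prime)

# Point mass at 3: Q(p) = 3, and the regularizer vanishes (no exploration),
# reflecting location invariance of Phi_h.
val_point = choquet_regularizer(lambda p: np.full_like(p, 3.0), h_prime)
```

For this particular h, the integral ∫₀¹ Q(p)(2p−1) dp equals half the Gini mean difference, so the regularizer rewards spread-out (more exploratory) sampling distributions, playing the role the differential entropy plays in the entropy-regularized formulation.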


Full work available at URL: https://arxiv.org/abs/2208.08497









Cited In (3)





This page was built for publication: Choquet Regularization for Continuous-Time Reinforcement Learning
