Exploratory Control with Tsallis Entropy for Latent Factor Models

Publication: 6200515

DOI: 10.1137/22M153505X
arXiv: 2211.07622
Wikidata: Q128694921
Scholia: Q128694921
MaRDI QID: Q6200515
FDO: Q6200515


Authors: Ryan Donnelly, Sebastian Jaimungal


Publication date: 22 March 2024

Published in: SIAM Journal on Financial Mathematics

Abstract: We study optimal control in models with latent factors where the agent controls the distribution over actions, rather than the actions themselves, in both discrete and continuous time. To encourage exploration of the state space, we reward exploration with Tsallis entropy and derive the optimal distribution over states, which we prove is q-Gaussian distributed, with location characterized through the solution of an FBSΔE and an FBSDE in discrete and continuous time, respectively. We discuss the relation between the solutions of the optimal exploration problems and the standard dynamic optimal control solution. Finally, we develop the optimal policy in a model-agnostic setting along the lines of soft Q-learning. The approach may be applied in, e.g., developing more robust statistical arbitrage trading strategies.
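The abstract centers on two mathematical objects: the Tsallis entropy used as the exploration reward, and the q-Gaussian form of the resulting optimal distribution. The following sketch is not taken from the paper; it is a minimal numerical illustration of the standard definitions of these objects (the Tsallis entropy S_q and the q-exponential that generates the unnormalized q-Gaussian density), with all function names and parameter values being illustrative assumptions.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1).

    Recovers the Shannon entropy -sum_i p_i ln p_i in the limit q -> 1.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # terms with p_i = 0 contribute nothing
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def q_exponential(x, q):
    """q-exponential exp_q(x) = [1 + (1 - q) x]_+^{1/(1-q)}; exp(x) as q -> 1.

    An (unnormalized) q-Gaussian density is exp_q(-beta * x**2),
    which is the shape the abstract attributes to the optimal distribution.
    """
    x = np.asarray(x, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

# Illustration: a fair coin has S_2 = 1/2 and S_1 = ln 2.
p = [0.5, 0.5]
print(tsallis_entropy(p, 2.0))   # (1 - 0.5) / 1 = 0.5
print(tsallis_entropy(p, 1.0))   # ln 2 ~ 0.6931

# Unnormalized q-Gaussian shape on a grid (beta = 1 chosen arbitrarily).
xs = np.linspace(-2.0, 2.0, 5)
print(q_exponential(-xs ** 2, 1.5))
```

For q > 1 the q-Gaussian has heavier tails than the Gaussian, and for q < 1 it has compact support, since the bracket in exp_q clips to zero; the Gaussian is recovered exactly at q = 1.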


Full work available at URL: https://arxiv.org/abs/2211.07622














