Exploratory Control with Tsallis Entropy for Latent Factor Models
From MaRDI portal
Publication:6200515
Abstract: We study optimal control in models with latent factors where the agent controls the distribution over actions, rather than the actions themselves, in both discrete and continuous time. To encourage exploration of the state space, we reward exploration with Tsallis entropy and derive the optimal distribution over states, which we prove is q-Gaussian distributed, with location characterized through the solution of an FBSΔE and an FBSDE in discrete and continuous time, respectively. We discuss the relation between the solutions of the optimal exploration problems and the standard dynamic optimal control solution. Finally, we develop the optimal policy in a model-agnostic setting along the lines of soft q-learning. The approach may be applied in, e.g., developing more robust statistical arbitrage trading strategies.
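As a hedged illustration of the q-Gaussian distribution that the abstract refers to, the sketch below samples q-Gaussian deviates via the generalized Box–Müller method (the cited work by Thistleton et al.), using the Tsallis q-logarithm. The function names and parameterization here are illustrative choices, not taken from the paper itself; the method is valid for q < 3 and reduces to the standard Box–Müller transform as q → 1.

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm: (x^(1-q) - 1)/(1 - q); reduces to ln(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian_sample(q, size, seed=None):
    """Draw q-Gaussian deviates via the generalized Box-Muller method (q < 3)."""
    rng = np.random.default_rng(seed)
    u1 = rng.uniform(size=size)
    u2 = rng.uniform(size=size)
    # The uniform draws are transformed with parameter q' = (1+q)/(3-q).
    q_prime = (1.0 + q) / (3.0 - q)
    return np.sqrt(-2.0 * q_log(u1, q_prime)) * np.cos(2.0 * np.pi * u2)
```

For q = 1 the sampler coincides with the classical Box–Müller transform and returns standard normal deviates; for 1 < q < 3 the deviates are heavy-tailed, which is what makes q-Gaussian exploration noise attractive in the robust trading application the abstract mentions.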
Cites work
- Scientific article; zbMATH DE number 7307478 (title unavailable)
- Backward Stochastic Differential Equations in Finance
- Entropy Regularization for Mean Field Games with Learning
- Error expansion for the discretization of backward stochastic differential equations
- Exploratory HJB equations and their convergence
- Exploratory LQG mean field games with entropy regularization
- Generalized Box–Müller Method for Generating $q$-Gaussian Random Deviates
- Possible generalization of Boltzmann-Gibbs statistics
- Reinforcement learning and stochastic optimisation
- State-Dependent Temperature Control for Langevin Diffusions
- Stochastic differential equations for the non linear filtering problem
- The nonadditive entropy S_q and its applications in physics and elsewhere: some remarks