Random Horizon Principal-Agent Problems

From MaRDI portal
Publication: Q5037495

DOI: 10.1137/20M1321620
zbMATH Open: 1483.91123
arXiv: 2002.10982
OpenAlex: W4211204780
MaRDI QID: Q5037495


Authors: Zhenjie Ren, Nizar Touzi, Junjian Yang, Yiqing Lin


Publication date: 1 March 2022

Published in: SIAM Journal on Control and Optimization

Abstract: We consider a general formulation of the random horizon Principal-Agent problem with a continuous payment and a lump-sum payment at termination. In the European version of the problem, the random horizon is chosen solely by the principal, with no possible action from the agent other than exerting effort on the dynamics of the output process. We also consider the American version of the contract, which covers Sannikov's seminal model, where the agent can also quit by optimally choosing the termination time of the contract. Our main result reduces such non-zero-sum stochastic differential games to appropriate stochastic control problems, which may be solved by standard methods of stochastic control theory. This reduction is obtained by following Sannikov's approach, as further developed by Cvitanić, Possamaï, and Touzi. We first introduce an appropriate class of contracts for which the agent's optimal effort is immediately characterized by the standard verification argument of stochastic control theory. We then show that this class of contracts is dense in an appropriate sense, so that optimizing over this restricted family of contracts entails no loss of generality. The result is obtained by using the recent well-posedness result for random horizon second-order backward SDEs.
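As an illustrative sketch of the formulation the abstract describes (the notation below is assumed for exposition and is not necessarily that of the paper): the principal offers a contract consisting of a continuous payment rate \(\pi\), a lump-sum payment \(\xi\) at the termination time \(\tau\), and the agent exerts effort \(\alpha\) on the drift of the output process \(X\):

```latex
% Sketch of a generic random-horizon principal-agent formulation.
% All symbols (X, \alpha, \pi, \xi, \tau, u, c, U, \rho, R) are
% illustrative notation, not taken from the paper itself.
\[
  dX_t = \alpha_t \, dt + \sigma \, dB_t, \qquad 0 \le t \le \tau .
\]
% Agent's problem: choose effort \alpha to maximize expected utility of
% the continuous payment \pi, net of the effort cost c, plus the utility
% of the terminal lump-sum payment \xi:
\[
  V^{A}(\pi,\xi,\tau)
  \;=\; \sup_{\alpha}\;
  \mathbb{E}\!\left[
    \int_0^{\tau} e^{-\rho t}\bigl( u(\pi_t) - c(\alpha_t) \bigr)\, dt
    \;+\; e^{-\rho \tau}\, U(\xi)
  \right].
\]
% Principal's problem: choose the contract (\pi, \xi, \tau) subject to
% the agent's participation constraint V^{A}(\pi,\xi,\tau) \ge R.
% In the European version the principal chooses \tau; in the American
% version the agent may also choose the termination time \tau.
```

The reduction described in the abstract restricts attention to contracts indexed by the agent's continuation value, for which the agent's optimal effort follows from a standard verification argument.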


Full work available at URL: https://arxiv.org/abs/2002.10982










Cited In (16)





