Dynamic contracts and learning by doing (Q2351399)

From MaRDI portal
Property / full work available at URL: https://doi.org/10.1007/s11579-014-0120-6
Property / OpenAlex ID: W2071817142


Language: English
Label: Dynamic contracts and learning by doing
Description: scientific article

    Statements

    Dynamic contracts and learning by doing (English)
    23 June 2015
    The author studies a game between a principal (who pays the agent and receives the product of her effort) and an agent (who receives the wage and exerts the productive effort). Information about effort is private to the agent: the principal pays according to observed output, which is stochastic and depends on the agent's effort, her human capital, and random disturbances. It is assumed that the output \(Y_t\) at any time \(t\) follows the stochastic differential equation \(dY_t = (a_t + h_t)\,dt + \sigma\, dZ_t\), where \(a_t\) is the effort, \(h_t\) the human capital and \(Z\) a Wiener process. The agent's effort also increases her stock of human capital deterministically according to \(dh_t = (a_t - \delta h_t)\,dt\), where \(\delta\) is a depreciation rate. The agent maximizes her total discounted expected utility, which depends on effort and the wage \(w\), while the principal maximizes the total discounted output net of wages paid. The author develops an optimal contract for this setup. The two main theorems of the article give, respectively, necessary and sufficient conditions for the optimal contract. The proofs proceed by considering deviations from the optimal solution and make use of stochastic calculus for Itô processes. The author then assumes that the agent has a CARA (constant absolute risk aversion) utility function \(u(w, a) = -\exp(-\theta(w - \lambda a))\) and derives a detailed characterisation of the optimal contract and its properties. He also shows that the finite-horizon solutions converge to the infinite-horizon solution.
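    As an illustration of the model dynamics summarised above, the following Python sketch simulates output and human capital paths by Euler-Maruyama discretisation under a constant effort path, and evaluates the CARA utility. All parameter values, the fixed effort level, and the function names are illustrative assumptions for this sketch; they are not taken from the article.

```python
import numpy as np

# Sketch of the reviewed model's dynamics (illustrative parameters only):
#   dY_t = (a_t + h_t) dt + sigma dZ_t   (cumulative output)
#   dh_t = (a_t - delta * h_t) dt        (human capital accumulation)

def simulate_paths(T=10.0, n_steps=1000, a=1.0, h0=0.0,
                   sigma=0.5, delta=0.1, seed=0):
    """Euler-Maruyama simulation of (Y_t, h_t) under constant effort a."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Y = np.zeros(n_steps + 1)
    h = np.zeros(n_steps + 1)
    h[0] = h0
    for k in range(n_steps):
        dZ = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        Y[k + 1] = Y[k] + (a + h[k]) * dt + sigma * dZ
        h[k + 1] = h[k] + (a - delta * h[k]) * dt  # deterministic growth
    return Y, h

def cara_utility(w, a, theta=1.0, lam=0.5):
    """CARA utility u(w, a) = -exp(-theta * (w - lambda * a))."""
    return -np.exp(-theta * (w - lam * a))

if __name__ == "__main__":
    Y, h = simulate_paths()
    print(f"terminal output Y_T = {Y[-1]:.3f}, human capital h_T = {h[-1]:.3f}")
    print(f"example CARA utility u(1.0, 0.5) = {cara_utility(1.0, 0.5):.4f}")
```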
    principal-agent model
    human capital
    learning-by-doing
    optimal contract
    stochastic optimization
    stochastic production
