Rationally inattentive control of Markov processes

From MaRDI portal
Publication: 2802080

DOI: 10.1137/15M1008476
zbMATH Open: 1360.93785
arXiv: 1502.03762
MaRDI QID: Q2802080
FDO: Q2802080


Authors: Ehsan Shafieepoorfard, Maxim Raginsky, Sean P. Meyn


Publication date: 25 April 2016

Published in: SIAM Journal on Control and Optimization

Abstract: The article poses a general model for optimal control subject to information constraints, motivated in part by recent work of Sims and others on information-constrained decision-making by economic agents. In the average-cost optimal control framework, the general model introduced in this paper reduces to a variant of the linear-programming representation of the average-cost optimal control problem, subject to an additional mutual information constraint on the randomized stationary policy. The resulting optimization problem is convex and admits a decomposition based on the Bellman error, which is the object of study in approximate dynamic programming. The theory is illustrated through the example of an information-constrained linear-quadratic-Gaussian (LQG) control problem. Some results on the infinite-horizon discounted-cost criterion are also presented.
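As a rough illustration only (not the paper's algorithm), the static, single-stage version of the trade-off described in the abstract — minimizing expected cost plus a mutual-information penalty over randomized policies — can be computed by a Blahut-Arimoto-style fixed-point iteration. The sketch below assumes finite state and action sets, a given prior, and a hypothetical inverse-temperature parameter `beta` weighting cost against information; all names are illustrative.

```python
import numpy as np

def rational_inattention_policy(prior, cost, beta, n_iter=200):
    """Solve min over p(u|x) of E[c(X, U)] + (1/beta) * I(X; U).

    prior : p(x), shape (nx,)
    cost  : c(x, u), shape (nx, nu)
    beta  : > 0, trades expected cost against the information penalty
    Returns the randomized policy p(u|x) and the action marginal q(u).
    """
    nx, nu = cost.shape
    q = np.full(nu, 1.0 / nu)  # initial marginal over actions
    for _ in range(n_iter):
        # Optimal channel given the marginal: p(u|x) ∝ q(u) exp(-beta c(x,u))
        policy = q[None, :] * np.exp(-beta * cost)
        policy /= policy.sum(axis=1, keepdims=True)
        # Update the action marginal induced by the prior and the channel
        q = prior @ policy
    return policy, q
```

As `beta` grows the information penalty vanishes and the policy approaches the deterministic cost minimizer; as `beta` shrinks the policy ignores the state and collapses onto a single marginal, mirroring the "inattentive" regime.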


Full work available at URL: https://arxiv.org/abs/1502.03762










Cited In (9)





