zbMath 0471.93002, MaRDI QID: Q1158123
Dimitri P. Bertsekas, Steven E. Shreve
Publication date: 1978
Published in: Mathematics in Science and Engineering
Related Items:
Partial hedging of American options in discrete time and complete markets: convex duality and optimal Markov policies,
On the optimality equation for average cost Markov control processes with Feller transition probabilities,
Continuous-time limit of dynamic games with incomplete information and a more informed player,
Non-paternalistic intergenerational altruism revisited,
Discounted Markov decision processes with fuzzy costs,
On redundant types and Bayesian formulation of incomplete information,
Local asymptotics for controlled martingales,
Almost-sure hedging with permanent price impact,
Conditions for the solvability of the linear programming formulation for constrained discounted Markov decision processes,
Numerical methods for the pricing of swing options: a stochastic control approach,
Optimal investment and reinsurance strategy,
Convex analytic approach to constrained discounted Markov decision processes with non-constant discount factors,
Optimality in Feller semi-Markov control processes,
Approximation of noncooperative semi-Markov games,
Markov stationary equilibria in stochastic supermodular games with imperfect private and public information,
Optimal transportation under controlled stochastic dynamics,
Risk-averse dynamic programming for Markov decision processes,
The policy iteration algorithm for average continuous control of piecewise deterministic Markov processes,
Choosing optimal road trajectory with random work cost in different areas,
Constructions of Nash equilibria in stochastic games of resource extraction with additive transition structure,
Characterization and computation of infinite-horizon specifications over Markov processes,
On Nikaido-Isoda type theorems for discounted stochastic games,
First passage problems for nonstationary discrete-time stochastic control systems,
Markov control models with unknown random state-action-dependent discount factors,
Robust Markov control processes,
Stationary Markov perfect equilibria in risk sensitive stochastic overlapping generations models,
Near optimality of quantized policies in stochastic control under weak continuity conditions,
Measurability of semimartingale characteristics with respect to the probability law,
Exchangeable capacities, parameters and incomplete theories,
Mathematical modeling of distributed catastrophic and terrorist risks,
Knows what it knows: a framework for self-aware learning,
Model selection in reinforcement learning,
Joint pricing and inventory replenishment decisions with returns and expediting,
Discounted dynamic programming with unbounded returns: application to economic models,
Performance analysis for controlled semi-Markov systems with application to maintenance,
Nonparametric adaptive control of discounted stochastic systems with compact state space,
Stochastic optimal control of unknown linear networked control system in the presence of random delays and packet losses,
Asymptotic analysis of value prediction by well-specified and misspecified models,
Average control of Markov decision processes with Feller transition probabilities and general action spaces,
Multiperiod mean-variance portfolio optimization via market cloning,
An optimal execution problem with market impact,
Discounted continuous-time Markov decision processes with unbounded rates and randomized history-dependent policies: the dynamic programming approach,
More on equilibria in competitive markets with externalities and a continuum of agents,
Distributions of multi-estimates for multistage stochastic inclusions,
Markov decision processes on Borel spaces with total cost and random horizon,
Discounted continuous-time constrained Markov decision processes in Polish spaces,
Asset market games of survival: a synthesis of evolutionary and dynamic games,
Optimal arbitrage under model uncertainty,
Partially observed semi-Markov zero-sum games with average payoff,
Approximation of Markov decision processes with general state space,
When do cylinder \(\sigma \)-algebras equal Borel \(\sigma \)-algebras in Polish spaces?,
Strategic market games with cyclic endowments,
Policy iteration algorithms for zero-sum stochastic differential games with long-run average payoff criteria,
Convergence of probability measures and Markov decision models with incomplete information,
Nonconvex homogenization for one-dimensional controlled random walks in random potential,
Exponential utility maximization under model uncertainty for unbounded endowments,
Stochastic finite-state systems in control theory,
Completely mixed strategies for two structured classes of semi-Markov games, principal pivot transform and its generalizations,
Quantitative model-checking of controlled discrete-time Markov processes,
Controlled Markov decision processes with AVaR criteria for unbounded costs,
Stochastic games for continuous-time jump processes under finite-horizon payoff criterion,
Planning for optimal control and performance certification in nonlinear systems with controlled or uncontrolled switches,
A sample-path approach to the optimality of echelon order-up-to policies in serial inventory systems,
Particle system algorithm and chaos propagation related to non-conservative McKean type stochastic differential equations,
Multivariate Bayesian process control for a finite production run,
Constrained BSDEs representation of the value function in optimal control of pure jump Markov processes,
Stochastic games with unbounded payoffs: applications to robust control in economics,
Accuracy of fluid approximations to controlled birth-and-death processes: absorbing case,
On the adaptive control of a class of partially observed Markov decision processes,
MDP algorithms for portfolio optimization problems in pure jump markets,
The obstacle version of the geometric dynamic programming principle: application to the pricing of American options under constraints,
An excursion-theoretic approach to stability of discrete-time stochastic hybrid systems,
Singular perturbation for the discounted continuous control of piecewise deterministic Markov processes,
Zero-sum stochastic games with partial information,
A risk reserve model for hedging in incomplete markets,
Stochastic control methods: Hedging in a market described by pure jump processes,
On a continuous solution to the Bellman-Poisson equation in stochastic games,
Semiparametric estimation of Markov decision processes with continuous state space,
Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path,
Discrete-time probabilistic approximation of path-dependent stochastic control problems,
Robust superhedging with jumps and diffusion,
Policy gradient in Lipschitz Markov decision processes,
Optimal stopping under adverse nonlinear expectation and related games,
Weak approximation of second-order BSDEs,
Impulse control problem on finite horizon with execution delay,
Semicontinuous nonstationary stochastic games. II,
Subjective random discounting and intertemporal choice,
Fixed points for extrema of contractions,
Existence of optimal stationary policies in deterministic optimal control,
Zero-sum ergodic semi-Markov games with weakly continuous transition probabilities,
Optimal control of Markovian jump processes with partial information and applications to a parallel queueing model,
An approximation result for normal integrands and applications to relaxed controls theory,
Stochastic control theory and operational research,
Optimal research and development expenditures under an incremental tax incentive scheme,
Lower closure for orientor fields by lower semicontinuity of outer integral functionals,
On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming,
A theory of rolling horizon decision making,
On essential information in sequential decision processes,
A limit theorem for Markov decision processes,
Consistent price systems under model uncertainty,
Risk-averse stochastic optimal control: an efficiently computable statistical upper bound,
Optimal recovery of a square integrable function from its observations with Gaussian errors,
Nonparametric Adaptive Robust Control under Model Uncertainty,
Optimal stopping under model ambiguity: A time‐consistent equilibrium approach,
Weak transport for non‐convex costs and model‐independence in a fixed‐income market,
Observation-based filtering of state of a nonlinear dynamical system with random delays,
Formalization of methods for the development of autonomous artificial intelligence systems,
Non-Markovian impulse control under nonlinear expectation,
Zero-sum stochastic games with the average-value-at-risk criterion,
Semi-uniform Feller stochastic kernels,
Smoothing policies and safe policy gradients,
Equivalent conditions for weak continuity of nonlinear filters,
Me, myself and I: a general theory of non-Markovian time-inconsistent stochastic control for sophisticated agents,
Adapted topologies and higher rank signatures,
Data-driven nonparametric robust control under dependence uncertainty,
Quantitative propagation of chaos for mean field Markov decision process with common noise,
Dynamic Cournot-Nash equilibrium: the non-potential case,
MODEL-FREE WEAK NO-ARBITRAGE AND SUPERHEDGING UNDER TRANSACTION COSTS BEYOND EFFICIENT FRICTION,
SOCKS: A Stochastic Optimal Control and Reachability Toolbox Using Kernel Methods,
A stochastic target problem for branching diffusion processes,
Optimal stopping with expectation constraints,
Reward Maximization Through Discrete Active Inference,
Short Communication: Existence of Markov Equilibrium Control in Discrete Time,
Nonzero-Sum Stochastic Impulse Games with an Application in Competitive Retail Energy Markets,
Some advances on constrained Markov decision processes in Borel spaces with random state-dependent discount factors,
Optimal control of path-dependent McKean-Vlasov SDEs in infinite-dimension,
Layered networks, equilibrium dynamics, and stable coalitions,
Correction to: "Layered networks, equilibrium dynamics, and stable coalitions",
Extreme Occupation Measures in Markov Decision Processes with an Absorbing State,
Dynamics of market making algorithms in dealer markets: Learning and tacit collusion,
Interval Markov Decision Processes with Continuous Action-Spaces,
Markov decision processes,
On Markov policies for minimax decision processes,
Bisimulation for Markov Decision Processes through Families of Functional Expressions,
On Fatou's lemma and parametric integrals for set-valued functions,
Bid-Ask Spread Modelling, a Perturbation Approach,
On Discrete-Time Dynamic Programming in Insurance: Exponential Utility and Minimizing the Ruin Probability,
Maximizing terminal utility by controlling risk exposure; a discrete-time dynamic control approach,
Stochastic Inventory Models with Limited Production Capacity and Periodically Varying Parameters,
Utility Functions Which Ensure the Adequacy of Stationary Strategies,
Optimal Radio-Mode Switching for Wireless Networked Control,
On the Existence of Optimal Policies for a Class of Static and Sequential Dynamic Teams,
On Convergence of Value Iteration for a Class of Total Cost Markov Decision Processes,
Dynamic Programming Subject to Total Variation Distance Ambiguity,
Ergodic risk-sensitive control of Markov processes on countable state space revisited,
A STOCHASTIC CONTROL APPROACH TO BID-ASK PRICE MODELLING,
The Exploration-Exploitation Trade-off in the Newsvendor Problem,
Partially observable Markov decision processes with partially observable random discount factors,
Continuous time Markov decision processes with interventions,
Approximate policy iteration: a survey and some new methods,
A review of stochastic algorithms with continuous value function approximation and some new approximate policy iteration algorithms for multidimensional continuous applications,
Bounds for the regret loss in dynamic programming under adaptive control,
The Expected Total Cost Criterion for Markov Decision Processes under Constraints: A Convex Analytic Approach,
A stochastic games framework for verification and control of discrete time stochastic hybrid systems,
A turnpike improvement algorithm for piecewise deterministic control,
Randomized and Relaxed Strategies in Continuous-Time Markov Decision Processes,
Particle methods for stochastic optimal control problems,
Non-randomized strategies in stochastic decision processes,
On the average cost optimality equation and the structure of optimal policies for partially observable Markov decision processes,
On the computation of the optimal cost function for discrete time Markov models with partial observations,
Density estimation and adaptive control of Markov processes: Average and discounted criteria,
Finite- and Infinite-Horizon Shapley Games with Nonsymmetric Partial Observation,
Control of Interbank Contagion Under Partial Information,
Two adaptively stepped monotone algorithms for solving discounted dynamic programming equations,
Controlled Markov processes on the infinite planning horizon: Weighted and overtaking cost criteria,
New exactly solvable examples for controlled discrete-time Markov chains,
Nonexponential Sanov and Schilder theorems on Wiener space: BSDEs, Schrödinger problems and control,
On the convergence of closed-loop Nash equilibria to the mean field game limit,
Aggregated occupation measures and linear programming approach to constrained impulse control problems,
Censored lifetime learning: optimal Bayesian age-replacement policies,
Stochastic reachability of a target tube: theory and computation,
Stochastic scheduling problems I — General strategies,
Use of Approximations of Hamilton-Jacobi-Bellman Inequality for Solving Periodic Optimization Problems,
Optimality Conditions for Partially Observable Markov Decision Processes,
Pathwise superhedging under proportional transaction costs,
The Repair VS. Replacement problem: A stochastic control approach,
Convex Analysis in Decentralized Stochastic Control, Strategic Measures, and Optimal Solutions,
Diffusive limit approximation of pure-jump optimal stochastic control problems,
Theoretical foundations of planning and navigation for autonomous robots,
On approximate and weak correlated equilibria in constrained discounted stochastic games,
Guaranteed deterministic approach to superhedging: most unfavorable scenarios of market behavior and the moment problem,
Discrete-time zero-sum games for Markov chains with risk-sensitive average cost criterion,
A Probabilistic Representation for the Value of Zero-Sum Differential Games with Incomplete Information on Both Sides,
Sequential stochastic control (single or multi-agent) problems nearly admit change of measures with independent measurement,
Bellman inequalities in Markov decision deterministic drift processes,
NASH EQUILIBRIA IN UNCONSTRAINED STOCHASTIC GAMES OF RESOURCE EXTRACTION,
Active sequential hypothesis testing,
A note on the \({\sigma}\)-compactness of sets of probability measures on metric spaces,
Constructing sublinear expectations on path space,
On stochastic programming II: dynamic problems under risk,
Inflationary equilibrium in a stochastic economy with independent agents,
On the effect of perturbation of conditional probabilities in total variation,
Semi-stationary Equilibrium Strategies in Non-cooperative N-person Semi-Markov Games,
On stochastic games in economics,
Minimum-variance control of astronomical adaptive optic systems with actuator dynamics under synchronous and asynchronous sampling,
Deterministic and stochastic optimization problems of Bolza type in discrete time,
Semi-Markov decision processes with a reachable state-subset,
Iterative algorithms for solving undiscounted Bellman equations,
Nonlinear Feynman-Kac formula and discrete-functional-type BSDEs with continuous coefficients,
Optimal partially reversible investment with entry decision and general production function,
Optimal asset-liability management with constraints: A dynamic programming approach,
Solving ALM problems via sequential stochastic programming,
WHEN ARE SWING OPTIONS BANG-BANG?,
Finite-horizon optimality for continuous-time Markov decision processes with unbounded transition rates,
On the existence of stationary optimal policies for partially observed MDPs under the long-run average cost criterion,
Nonstationary discrete-time deterministic and stochastic control systems with infinite horizon,
Separated Design of Encoder and Controller for Networked Linear Quadratic Optimal Control,
Control with limited information,
UTILITY MAXIMIZATION UNDER MODEL UNCERTAINTY IN DISCRETE TIME,
Multiobjective Stopping Problem for Discrete-Time Markov Processes: Convex Analytic Approach,
Reinsurance optimal strategy of a loss excess,
A Backward Dual Representation for the Quantile Hedging of Bermudan Options,
Constrained and Unconstrained Optimal Discounted Control of Piecewise Deterministic Markov Processes,
One-dimensional wave equation with set-valued boundary damping: well-posedness, asymptotic stability, and decay rates,
OPTIMAL INVESTMENT ON FINITE HORIZON WITH RANDOM DISCRETE ORDER FLOW IN ILLIQUID MARKETS,
Strong Uniform Value in Gambling Houses and Partially Observable Markov Decision Processes,
A Weak Dynamic Programming Principle for Combined Optimal Stopping/Stochastic Control with \(\mathcal{E}^{f}\)-expectations,
Variance Regularization in Sequential Bayesian Optimization,
Utility Maximization with Proportional Transaction Costs Under Model Uncertainty,
Sequential selection of a monotone subsequence from a random permutation,
Nonlinear Lévy processes and their characteristics,
Optimal Sequential Selection of a Unimodal Subsequence of a Random Sequence,
Infinite-horizon deterministic dynamic programming in discrete time: a monotone convergence principle and a penalty method,
Absorbing Continuous-Time Markov Decision Processes with Total Cost Criteria,
Online Selection of Alternating Subsequences from a Random Sample,
Probabilistic representation of a class of non conservative nonlinear Partial Differential Equations,
Dynamic programming for non-additive stochastic objectives,
Stationary Plans need not be Uniformly Adequate for Leavable, Borel Gambling Problems,
Robust shortest path planning and semicontractive dynamic programming,
On the reduction of total-cost and average-cost MDPs to discounted MDPs,
Adaptive Robust Control under Model Uncertainty,
Partially Observable Semi-Markov Games with Discounted Payoff,
Analytically measurable selection of epsilon optimal transition kernels,
The Expected Total Cost Criterion for Markov Decision Processes under Constraints,
Distributed asynchronous computation of fixed points,
Preventive replacement for multi-parts systems,
Stable Optimal Control and Semicontractive Dynamic Programming,
A note on an optimal choice problem,
Stochastic scheduling problems II — Set strategies,
Optimal Control of a Partially Observable Failing System with Costly Multivariate Observations,
Optimal sensor scheduling for hidden Markov model state estimation,
Asymptotic optimality of tracking policies in stochastic networks,
A model for investment decisions with switching costs,
From weak to strong convergence in \(L_1\)-spaces via \(K\)-convergence,
Existence of optimal stationary policies in discounted Markov decision processes: Approaches by occupation measures,
Semicontinuous nonstationary stochastic games,
On \(\epsilon\)-optimal continuous selectors and their application in discounted dynamic programming,
Regularity properties in a state-constrained expected utility maximization problem,
Applicable stochastic control: From theory to practice,
Boundedly optimal control of piecewise deterministic systems,
Canonical supermartingale couplings,
A note on the Ross-Taylor theorem,
Markov decision processes with a minimum-variance criterion,
Computational aspects in applied stochastic control,
Stochastic viscosity solutions for nonlinear stochastic partial differential equations. II,
Robust expected utility maximization with medial limits,
Continuous-action planning for discounted infinite-horizon nonlinear optimal control with Lipschitz values,
Active network management for electrical distribution systems: problem formulation, benchmark, and approximate solution,
A two-player zero-sum game where only one player observes a Brownian motion,
Cost-minimal immunization in the Greenwood epidemic model,
Dynamic diagnostic and decision procedures under uncertainty,
Approximation of average cost optimal policies for general Markov decision processes with unbounded costs,
Probabilistic aspects of finite-fuel, reflected follower problems,
Approximations for optimal stopping of a piecewise-deterministic process,
A never-a-weak-best-response test in infinite signaling games,
Backward SDEs for optimal control of partially observed path-dependent stochastic systems: A control randomization approach,
Multiple-priors optimal investment in discrete time for unbounded utility function,
On a strong form of propagation of chaos for McKean-Vlasov equations,
Markov chains with a transition possibility measure and fuzzy dynamic programming,
A potential of fuzzy relations with a linear structure: The unbounded case,
Controlled semi-Markov models - the discounted case,
Theory of dynamic portfolio for survival under uncertainty,
Discrete dynamic programming and viscosity solutions of the Bellman equation,
On compactness of the space of policies in stochastic dynamic programming,
Nonparametric adaptive control of discrete-time partially observable stochastic systems,
Impulse control of piecewise-deterministic processes,
Discretization procedures for adaptive Markov control processes,
Robustness inequality for Markov control processes with unbounded costs,
Stochastic programs without duality gaps,
A general decomposition approach for multi-criteria decision trees,
Optimal control of infinite-dimensional piecewise deterministic Markov processes and application to the control of neuronal dynamics via optogenetics,
A new look at optimal growth under uncertainty,
The risk probability criterion for discounted continuous-time Markov decision processes,
Generalized stochastic target problems for pricing and partial hedging under loss constraints – application in optimal book liquidation,
The transformation method for continuous-time Markov decision processes,
Regularity properties of constrained set-valued mappings,
Dual formulation of second order target problems,
Total reward semi-Markov mean-field games with complementarity properties,
Randomized filtering and Bellman equation in Wasserstein space for partial observation control problem,
Two characterizations of optimality in dynamic programming,
Anwendungen des Maximumprinzips im Operations Research. I,
A unified framework for stochastic optimization,
A method for high-dimensional smoothing,
Robust topological policy iteration for infinite horizon bounded Markov decision processes,
Average cost Markov decision processes under the hypothesis of Doeblin,
Functional characterization for average cost Markov decision processes with Doeblin's conditions,
Hitting times in Markov chains with restart and their application to network centrality,
Monte-Carlo algorithms for a forward Feynman-Kac-type representation for semilinear nonconservative partial differential equations,
Stochastic control for a class of nonlinear kernels and applications,
Quantile hedging in a semi-static market with model uncertainty,
On strong average optimality of Markov decision processes with unbounded costs,
Economic design of memory-type control charts: the fallacy of the formula proposed by Lorenzen and Vance (1986),
Deep reinforcement learning with temporal logics,
A linear-quadratic Gaussian approach to dynamic information acquisition,
Linear programming approach to optimal impulse control problems with functional constraints,
Some structured dynamic programs arising in economics,
Stochastic invariance of closed sets with non-Lipschitz coefficients,
Nonconcave robust optimization with discrete strategies under Knightian uncertainty,
Epi-convergent discretizations of stochastic programs via integration quadratures,
Dynamic importance sampling for uniformly recurrent Markov chains,
Estimation of random information sets of multistep systems,
Nonzero-sum games for continuous-time jump processes under the expected average payoff criterion,
Lagrangian approximations for stochastic reachability of a target tube,
Markov decision processes with quasi-hyperbolic discounting,
A new weak solution to an optimal stopping problem,
Tutorial on risk neutral, distributionally robust and risk averse multistage stochastic programming,
Time (in)consistency of multistage distributionally robust inventory models with moment constraints,
Stackelberg equilibrium in a dynamic stimulation model with complete information,
Stochastic output-feedback model predictive control,
On the nonexplosion and explosion for nonhomogeneous Markov pure jump processes,
The complexity of dynamic programming,
On piecewise deterministic Markov control processes: Control of jumps and of risk processes in insurance,
A stochastic dynamic programming model for scheduling of offshore petroleum fields with resource uncertainty,
Martingales with given maxima and terminal distributions,
Remarks on the existence of solutions to the average cost optimality equation in Markov decision processes,
Sensitivity analysis of multisector optimal economic dynamics,
Sequential identification and adaptive control in stochastic systems,
Multiple objective nonatomic Markov decision processes with total reward criteria,
On the two-armed bandit problem with non-observed Poissonian switching of arms,
Constructing prior distributions with trees of exchangeable processes,
The value iteration method for countable state Markov decision processes,
Constrained Markovian decision processes: The dynamic programming approach,
Nonatomic total rewards Markov decision processes with multiple criteria,
Fuzzy decision processes with an average reward criterion,
A limit theorem in some dynamic fuzzy systems,
A potential of fuzzy relations with a linear structure: The contractive case,
Another look at the Radner-Stiglitz nonconcavity in the value of information,
Optimal cost and policy for a Markovian replacement problem,
Stationary policies and Markov policies in Borel dynamic programming,
Revised simplex algorithm for finite Markov decision processes,
Minimax selection theorems,
Fine properties of the optimal Skorokhod embedding problem,
State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings,
Constrained discounted stochastic games,
McKean-Vlasov optimal control: the dynamic programming principle,
An evolutionary finance model with short selling and endogenous asset supply,
A note on risk-sensitive control of invariant models,
Discounted stochastic games for continuous-time jump processes with an uncountable state space,
Risk-sensitive discounted cost criterion for continuous-time Markov decision processes on a general state space,
Continuous-time zero-sum games for Markov decision processes with discounted risk-sensitive cost criterion,
Optimal feedback control of stock prices under credit risk dynamics,
A consumption and investment problem via a Markov decision processes approach with random horizon,
First passage risk probability minimization for piecewise deterministic Markov decision processes,
Sequential systems of reflected backward stochastic differential equations with application to impulse control,
Stochastic filtering of a pure jump process with predictable jumps and path-dependent local characteristics,
Availability maximization under partial observations,
Optimal synchronization problem for a multi-agent system,
A two-step problem of hedging a European call option under a random duration of transactions,
Zero-sum continuous-time Markov pure jump game over a fixed duration,
Stationary Markov perfect equilibria in discounted stochastic games,
Kantorovich duality for general transport costs and applications,
Equilibria in altruistic economic growth models,
Discrete-time ergodic mean-field games with average reward on compact spaces,
Uniqueness of equilibrium in a Bewley-Aiyagari model,
Risk-sensitive continuous-time Markov decision processes with unbounded rates and Borel spaces,
Probabilistic interpretation for solutions of fully nonlinear stochastic PDEs,
Reduction of total-cost and average-cost MDPs with weakly continuous transition probabilities to discounted MDPs,
On risk-sensitive piecewise deterministic Markov decision processes,
On dynamic programming principle for stochastic control under expectation constraints,
Irreducible convex paving for decomposition of multidimensional martingale transport plans,
Quenched mass transport of particles toward a target,
Parameter-dependent stochastic optimal control in finite discrete time,
On the expected total reward with unbounded returns for Markov decision processes,
A level-set approach for stochastic optimal control problems under controlled-loss constraints,
Consistency of the maximum likelihood estimator for general hidden Markov models,
On discounted dynamic programming with unbounded returns,
Second order backward SDE with random terminal time,
Partially observed nonlinear risk-sensitive optimal stopping control for nonlinear discrete-time systems,
All adapted topologies are equal,
Empirical risk minimization and complexity of dynamical models,
\(\mathcal{L}_1\)-optimal filtering of Markov jump processes. I: Exact solution and numerical implementation schemes,
Optimal control of a discrete-time stochastic system with a probabilistic criterion and a non-fixed terminal time,
Optimal control of infinite-dimensional piecewise deterministic Markov processes: a BSDE approach. Application to the control of an excitable cell membrane,
Stochastic dynamic programming with non-linear discounting,
Elementary results on solutions to the Bellman equation of dynamic programming: existence, uniqueness, and convergence,
Zero-sum stochastic games with partial information and average payoff,
Robust Markov perfect equilibria,
Determining the optimal strategies for discrete control problems on stochastic networks with discounted costs,
Stochastic optimal control of risk processes with Lipschitz payoff functions,
On measurable minimax selectors,
Zero-sum stochastic differential games of generalized McKean-Vlasov type,
Average cost optimal policies for Markov control processes with Borel state space and unbounded costs,
Turnpike properties for a class of piecewise deterministic systems arising in manufacturing flow control,
Weak Feller property of non-linear filters,
Risk-sensitive average equilibria for discrete-time stochastic games,
The effect of multi-sensor data on condition-based maintenance policies,
On the quasi-sure superhedging duality with frictions,
Law invariant risk measures and information divergences,
A guaranteed deterministic approach to superhedging: financial market model, trading constraints, and the Bellman-Isaacs equations,
Constrained discounted Markov decision processes with Borel state spaces,
On stochastic linear systems with zonotopic support sets,
Conditional nonlinear expectations,
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds,
Constrained expected average stochastic games for continuous-time jump processes,
A framework for the dynamic programming principle and martingale-generated control correspondences,
Transport plans with domain constraints,
Recursive equilibrium in Krusell and Smith (1998),
Good deal hedging and valuation under combined uncertainty about drift and volatility,
A unified framework for robust modelling of financial markets in discrete time,
Multiperiod martingale transport,
Nash equilibrium in a special case of symmetric resource extraction games,
Central limit theorem and sample complexity of stationary stochastic programs,
Distributionally robust optimal control and MDP modeling,
Perov's contraction principle and dynamic programming with stochastic discounting,
Optimal switching problems with an infinite set of modes: an approach by randomization and constrained backward SDEs,
A Fenchel-Moreau-Rockafellar type theorem on the Kantorovich-Wasserstein space with applications in partially observable Markov decision processes,
Nonzero-sum risk-sensitive average stochastic games: The case of unbounded costs,
Duality in optimal impulse control,
On structural properties of optimal average cost functions in Markov decision processes with Borel spaces and universally measurable policies,
Controllable Markov jump processes. I: Optimum filtering based on complex observations,
Safe-visor architecture for sandboxing (AI-based) unverified controllers in stochastic cyber-physical systems,
Piecewise deterministic Markov processes and their invariant measures,
Production/inventory competition between firms with fixed-proportions co-production systems,
A note on topological aspects in dynamic games of resource extraction and economic growth theory,
Distributionally robust modeling of optimal control,
Reduced-form framework under model uncertainty,
Existence, duality, and cyclical monotonicity for weak transport costs,
From reinforcement learning to optimal control: a unified framework for sequential decisions,
Stochastic approximations of constrained discounted Markov decision processes,
Kolmogorov's equations for jump Markov processes with unbounded jump rates,
Consistency of maximum likelihood estimation for some dynamical systems,
Joint pricing and inventory control for additive demand models with reference effects,
Arbitrage and duality in nondominated discrete-time models,
Automata-based controller synthesis for stochastic systems: a game framework via approximate probabilistic relations,
Existence of stationary Markov perfect equilibria in stochastic altruistic growth economies,
McKean Feynman-Kac probabilistic representations of non-linear partial differential equations,
Averaging and linear programming in some singularly perturbed problems of optimal control,
Automatic model training under restrictive time constraints,
Optimal execution with stochastic delay,
Randomized and backward SDE representation for optimal control of non-Markovian SDEs,
A stability result for linear Markovian stochastic optimization problems,
Optimal Control of Partially Observable Semi-Markovian Failing Systems: An Analysis Using a Phase Methodology,
Kolmogorov's Equations for Jump Markov Processes and Their Applications to Control Problems,
An Algorithm to Construct Subsolutions of Convex Optimal Control Problems,
Risk-Sensitive Markov Decision Problems under Model Uncertainty: Finite Time Horizon Case,
Averaged time-optimal control problem in the space of positive Borel measures,
Fenchel-Moreau Conjugation Inequalities with Three Couplings and Application to Stochastic Bellman Equation,
PORTFOLIO OPTIMIZATION UNDER A QUANTILE HEDGING CONSTRAINT,
Markov-Nash Equilibria in Mean-Field Games with Discounted Cost,
Utilisation de la programmation dynamique dans la modélisation de la pêcherie de la sardine au Maroc,
Robust Utility Maximization in Discrete-Time Markets with Friction,
Optimal Control of Continuous-Time Markov Chains with Noise-Free Observation,
What is the value of the cross-sectional approach to deep reinforcement learning?,
Monotonicity and bounds for convex stochastic control models,
Short Communication: Super-Replication Prices with Multiple Priors in Discrete Time,
Exact Solutions and Approximations for Optimal Investment Strategies and Indifference Prices,
Simple and Optimal Methods for Stochastic Variational Inequalities, II: Markovian Noise and Policy Evaluation in Reinforcement Learning,
From Infinite to Finite Programs: Explicit Error Bounds with Applications to Approximate Dynamic Programming,
Sufficiency of Markov Policies for Continuous-Time Jump Markov Decision Processes,
On Linear Programming for Constrained and Unconstrained Average-Cost Markov Decision Processes with Countable Action Spaces and Strictly Unbounded Costs,
Optimal Transport-Based Distributionally Robust Optimization: Structural Properties and Iterative Schemes,
Forward Feynman-Kac type representation for semilinear non-conservative partial differential equations,
Gradual-Impulsive Control for Continuous-Time Markov Decision Processes with Total Undiscounted Costs and Constraints: Linear Programming Approach via a Reduction Method,
Filtering method for linear and non-linear stochastic optimal control of partially observable systems II,
Algorithmic Trading, Stochastic Control, and Mutually Exciting Processes,
The Stochastic Auxiliary Problem Principle in Banach Spaces: Measurability and Convergence,
Markov Decision Processes with Incomplete Information and Semiuniform Feller Transition Probabilities,
Verification of General Markov Decision Processes by Approximate Similarity Relations and Policy Refinement,
Regular Policies in Abstract Dynamic Programming,
Technical Note—A Permutation-Dependent Separability Approach for Capacitated Two-Echelon Inventory Systems,
On the time discretization of stochastic optimal control problems: The dynamic programming approach,
Continuous-Time Markov Decision Processes with Exponential Utility,
Approximate Nash Equilibria in Partially Observed Stochastic Games with Mean-Field Interactions,
Unique Tarski Fixed Points,
Stochastic filtering and optimal control of pure jump Markov processes with noise-free partial observation,
Robustness to Incorrect System Models in Stochastic Control,
Finite-stage stochastic decision processes with recursive reward structure I: optimality equations and deterministic strategies,
Randomized dynamic programming principle and Feynman-Kac representation for optimal control of McKean-Vlasov dynamics,
Uniformly Bounded Regret in the Multisecretary Problem,
Causal Transport in Discrete Time and Applications,
Ordinary Differential Equation Methods for Markov Decision Processes and Application to Kullback-Leibler Control Cost,
Periodical Multistage Stochastic Programs,
Realizable Strategies in Continuous-Time Markov Decision Processes,
Optimal Control Under Uncertainty and Bayesian Parameters Adjustments,
Relaxation and Purification for Nonconvex Variational Problems in Dual Banach Spaces: The Minimization Principle in Saturated Measure Spaces,
Super-replication price: it can be ok,
Easy Affine Markov Decision Processes,
One Problem of Statistically Uncertain Estimation,
A Universal Dynamic Program and Refined Existence Results for Decentralized Stochastic Control,
Average Cost Optimality Inequality for Markov Decision Processes with Borel Spaces and Universally Measurable Policies,
Optimal Control of Partially Observable Piecewise Deterministic Markov Processes,
INTEGRATED DECISION ON PRICING, PROMOTION AND INVENTORY MANAGEMENT,
Average Cost Markov Decision Processes with Semi-Uniform Feller Transition Probabilities,
On an Approach to Evaluation of Health Care Programme by Markov Decision Model,
Sufficiency of Deterministic Policies for Atomless Discounted and Uniformly Absorbing MDPs with Multiple Criteria,
Structures and methods of dynamical decision-making,
Adaptive Robust Control in Continuous Time,
OPTIMAL REPLACEMENT POLICIES UNDER ENVIRONMENT-DRIVEN DEGRADATION,
Quadratic approximate dynamic programming for input-affine systems,
Model-Free Price Bounds Under Dynamic Option Trading,
Discrete-Time Semi-Markov Random Evolutions and their Applications,
Non-Stationary Semi-Markov Decision Processes on a Finite Horizon,
Probabilistic Model Checking of Labelled Markov Processes via Finite Approximate Bisimulations,
Occupation measures in average cost Markov decision processes,
On a Discounted Inventory Game,
Extended Laplace principle for empirical measures of a Markov chain,
Discrete-type approximations for non-Markovian optimal stopping problems: Part I,
Discrete Dividend Payments in Continuous Time,
An approximation scheme for the optimal control of diffusion processes,
Viscosity Solutions of Path-Dependent PDEs with Randomized Time,
On Reducing a Constrained Gradual-Impulsive Control Problem for a Jump Markov Model to a Model with Gradual Control Only,
The Robust Superreplication Problem: A Dynamic Approach,
Regression Monte Carlo for microgrid management,
Quantifying Distributional Model Risk via Optimal Transport,
On the Minimum Pair Approach for Average Cost Markov Decision Processes with Countable Discrete Action Spaces and Strictly Unbounded Costs,
Risk-Sensitive Discounted Continuous-Time Markov Decision Processes with Unbounded Rates,
Compactness criterion for semimartingale laws and semimartingale optimal transport,
Average Reward Markov Decision Processes with Multiple Cost Constraints,
Robustness to Incorrect Priors in Partially Observed Stochastic Control,
Optimal Impulse Control of Dynamical Systems,
Constrained Markov Decision Processes with Expected Total Reward Criteria,
On zero-sum two-person undiscounted semi-Markov games with a multichain structure,
TIME-INCONSISTENT MARKOVIAN CONTROL PROBLEMS UNDER MODEL UNCERTAINTY WITH APPLICATION TO THE MEAN-VARIANCE PORTFOLIO SELECTION,
Dual Representation of the Cost of Designing a Portfolio Satisfying Multiple Risk Constraints,
Nowak's Theorem on Probability Measures Induced by Strategies Revisited,
Preselective strategies for the optimization of stochastic project networks under resource constraints,
Algorithmic approaches to preselective strategies for stochastic scheduling problems,
Impulsive Control for Continuous-Time Markov Decision Processes,
Stochastic Comparative Statics in Markov Decision Processes,
Average optimality for continuous-time Markov decision processes under weak continuity conditions,
A Finite Time Analysis of Temporal Difference Learning with Linear Function Approximation,
On Hedging American Options under Model Uncertainty,
On the correctness of monadic backward induction,
On gradual-impulse control of continuous-time Markov decision processes with exponential utility,
Filtering method for linear and non-linear stochastic optimal control of partially observable systems,
Constrained Markov decision processes with compact state and action spaces: the average case,
Strategic measures in optimal control problems for stochastic sequences,
Mean Field Equilibrium: Uniqueness, Existence, and Comparative Statics