scientific article; zbMATH DE number 3320878
From MaRDI portal (Publication: 5599448)
zbMATH Open: 0202.18401 · MaRDI QID: Q5599448
Authors: K. Hinderer
Publication date: 1970
Title of this publication is not available.
Cited in (first 100 items):
- Constrained denumerable state non-stationary MDPs with expected total reward criterion
- A dual approach to Bayesian inference and adaptive control
- Minimax control for discrete-time time-varying stochastic systems
- A dynamic multi-item two-activity problem
- Necessary and sufficient conditions for a bounded solution to the optimality equation in average reward Markov decision chains
- Measurable selection theorems for optimization problems
- Equivalence of Lyapunov stability criteria in a class of Markov decision processes
- Existence of optimal stationary policies in average reward Markov decision processes with a recurrent state
- Markov-Nash equilibria in mean-field games with discounted cost
- Optimal research and development expenditures under an incremental tax incentive scheme
- Stochastic scheduling problems I — General strategies
- Continuous-time Markov decision processes with state-dependent discount factors
- Dynamic risk measures under model uncertainty
- On Markov policies for minimax decision processes
- Markov control processes with randomized discounted cost
- A fuzzy approach to Markov decision processes with uncertain transition probabilities
- Discounted Cost Markov Decision Processes with a Constraint
- Partially observable total-cost Markov decision processes with weakly continuous transition probabilities
- Kleisli morphisms and randomized congruences for the Giry monad
- Characterizations of optimal policies in a general stopping problem and stability estimating
- Markov renewal decision processes with finite horizon
- Estimates of stability of Markov control processes with unbounded costs
- Estimates for perturbations of average Markov decision processes with a minimal state and upper bounded by stochastically ordered Markov chains
- Zero-sum risk-sensitive stochastic games
- Markov decision processes under ambiguity
- Conditions for the solvability of the linear programming formulation for constrained discounted Markov decision processes
- The recursive approach to time inconsistency
- Optimal investment and consumption with stochastic dividends
- Risk measurement and risk-averse control of partially observable discrete-time Markov systems
- On Nash equilibrium solutions in nonzero-sum stochastic games with complete information
- Conditions for characterizing the structure of optimal strategies in infinite-horizon dynamic programs
- Denumerable controlled Markov chains with average reward criterion: Sample path optimality
- Recent results on conditions for the existence of average optimal stationary policies
- Markov decision processes on Borel spaces with total cost and random horizon
- Markov decision processes associated with two threshold probability criteria
- Optimal inventory policies when the demand distribution is not known
- The transformation method for continuous-time Markov decision processes
- Optimal strategies for an inventory system with cost functions of general form
- Markov decision processes with iterated coherent risk measures
- A mathematical framework for learning and adaption: (Generalized) random systems with complete connections
- Optimal replacement under additive damage in randomly varying environments
- On variable discounting in dynamic programming: applications to resource extraction and other economic models
- On compactness of the space of policies in stochastic dynamic programming
- Evolution and market behavior
- On dynamic programming: Compactness of the space of policies
- Adaptive policy-iteration and policy-value-iteration for discounted Markov decision processes
- Stochastic dynamic programming with non-linear discounting
- Credibilistic Markov decision processes: The average case
- Stochastic control theory and operational research
- Value iteration in average cost Markov control processes on Borel spaces
- Approximation Theorems for Zero-Sum Nonstationary Stochastic Games
- Optimal policies for constrained average-cost Markov decision processes
- Finite-state approximations for denumerable multidimensional state discounted Markov decision processes
- Convergence of probability measures and Markov decision models with incomplete information
- Adaptive control for discrete-time Markov processes with unbounded costs: Discounted criterion
- Average cost optimal policies for Markov control processes with Borel state space and unbounded costs
- Nonstationary value-iteration and adaptive control of discounted semi-Markov processes
- A note on negative dynamic programming for risk-sensitive control
- Adaptive control of discounted Markov decision chains
- Conditions for optimality in dynamic programming and for the limit of n-stage optimal policies to be optimal
- On discounted dynamic programming with constraints
- Randomization and simplification in dynamic decision-making
- A selection theorem for optimization problems
- Semicontinuous nonstationary stochastic games. II
- Recurrence conditions for Markov decision processes with Borel state space: A survey
- A stochastic interpretation of game logic
- Stochastic scheduling problems II — Set strategies
- Arbitrary state semi-Markov decision processes
- Existence of optimal policy for time non-homogeneous discounted Markovian decision programming
- On essential information in sequential decision processes
- An analysis of transient Markov decision processes
- Controlled jump processes
- Approximation of average cost optimal policies for general Markov decision processes with unbounded costs
- The Bellman's principle of optimality in the discounted dynamic programming
- Nonparametric adaptive control of discrete-time partially observable stochastic systems
- Semicontinuous nonstationary stochastic games
- Fixed point theorems for discounted finite Markov decision processes
- Monotonicity and the principle of optimality
- Estimation and control in discounted stochastic dynamic programming
- Markov decision processes with state-dependent discount factors and unbounded rewards/costs
- Utility, probabilistic constraints, mean and variance of discounted rewards in Markov decision processes
- Sufficient conditions for optimality of a \((z,c^-,c^+)\)-sampling plan in multistage Bayesian acceptance sampling
- A unified approach to adaptive control of average reward Markov decision processes
- Continuous dependence of stochastic control models on the noise distribution
- Bounds for the approximation of dynamic programs
- Preventive replacement for multi-parts systems
- Some comments on preference order dynamic programming models
- A note on the convergence rate of the value iteration scheme in controlled Markov chains
- A pause control approach to the value iteration scheme in average Markov decision processes
- \(C^3\) modeling with symmetrical rationality
- On the convergence of successive approximations in dynamic programming with non-zero terminal reward
- Robustness inequality for Markov control processes with unbounded costs
- Estimates for finite-stage dynamic programs
- Approximations of inventory models
- A remark on the connections between coding and dynamic programming
- On a Continuously Discounted Vector Valued Markov Decision Process
- Markov control models with unknown random state-action-dependent discount factors