Contraction Mappings in the Theory Underlying Dynamic Programming
From MaRDI portal
Publication:5535549
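The publication concerns viewing the dynamic-programming operator as a contraction mapping, so that the optimal value function is its unique fixed point. As a minimal illustration (our own sketch, not code from the cited paper): for a discounted MDP with discount factor 0 ≤ β < 1, the Bellman operator is a β-contraction in the sup norm, and iterating it (value iteration) converges to the fixed point by the Banach fixed-point theorem. Array shapes and names below are illustrative assumptions.

```python
import numpy as np

def bellman_operator(v, r, p, beta):
    """Apply the Bellman operator (T v)(s) = max_a [r(s,a) + beta * sum_s' p(s'|s,a) v(s')].

    Shapes: r is (S, A), p is (S, A, S), v is (S,).
    """
    # p @ v contracts the last axis of p with v, giving the expected
    # next-state value for every (state, action) pair, shape (S, A).
    return np.max(r + beta * (p @ v), axis=1)

def value_iteration(r, p, beta, tol=1e-10, max_iter=10_000):
    """Iterate the contraction T from v = 0 until the sup-norm change is below tol."""
    v = np.zeros(r.shape[0])
    for _ in range(max_iter):
        v_new = bellman_operator(v, r, p, beta)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```

Because T is a β-contraction, successive iterates satisfy ||T v − v*||∞ ≤ β ||v − v*||∞, which also yields the familiar a priori error bounds used by several of the approximation papers cited below.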
Cited in
- A set of successive approximation methods for discounted Markovian decision problems
- Partially observable Markov decision model for the treatment of early prostate cancer
- An abstract topological approach to dynamic programming
- Inventory control of service parts in the final phase
- The repair vs. replacement problem: A stochastic control approach
- Fuzzy approach to multilevel knapsack problems
- Semi-Markov information model for revenue management and dynamic pricing
- The shortest path problem with two objective functions
- Finite-state approximations for denumerable-state infinite-horizon discounted Markov decision processes
- Optimality of the fastest available server policy
- A Fixed Point Approach to Undiscounted Markov Renewal Programs
- Transformation of partially observable Markov decision processes into piecewise linear ones
- Reducing the number of multiplications in iterative processes
- Controlled semi-Markov models - the discounted case
- Multi-period production control in a centralized fully flexible manufacturing system
- Some structured dynamic programs arising in economics
- Discrete convexity: Convexity for functions defined on discrete spaces
- On constrained Markov decision processes
- Discounted Stochastic Ratio Games
- Finite state approximations for denumerable state infinite horizon discounted Markov decision processes with unbounded rewards
- Sequential Stackelberg equilibria in two-person games
- Applications of fixed-point methods to discrete variational and quasi-variational inequalities
- Dynamic programming and the Lagrange multipliers
- On the estimation of the unknown sample size from the number of records
- Four Canadian Contributions to Stochastic Modeling
- Finite state approximation algorithms for average cost denumerable state Markov decision processes
- An efficient algorithm for the dynamic economic lot size problem
- Policy iteration and Newton-Raphson methods for Markov decision processes under average cost criterion
- Discretizing dynamic programs
- Contingent planning under uncertainty via stochastic satisfiability
- Approximate policy iteration: a survey and some new methods
- The effect on optimal consumption of increased uncertainty in labor income in the multiperiod case
- Heuristic Assignments of Redundant Software Versions and Processors in Fault-tolerant Computer Systems for Maximum Reliability
- (Approximate) iterated successive approximations algorithm for sequential decision processes
- Discounting axioms imply risk neutrality
- Designing an optimal production system with inspection
- Optimal pricing of a product with periodic enhancements
- System planning and configuration problems for optimal system design
- A generalized theorem of the maximum
- Asymptotic expansions for dynamic programming recursions with general nonnegative matrices
- Markov decision processes
- Optimal threshold probability in undiscounted Markov decision processes with a target set
- A new characterization for the dynamic lot size problem with bounded inventory
- Nonstationary Markov decision problems with converging parameters
- Optimal location of dwell points in a single loop AGV system with time restrictions on vehicle availability
- Conjugate duality and the curse of dimensionality
- A dynamic game of reputation and economic performances in nondemocratic regimes
- Iterative Bounds on the Equilibrium Distribution of a Finite Markov Chain
- Theory and applications of generalized dynamic programming: An overview
- Fixed point theorems for discounted finite Markov decision processes
- On a nonseparable convex maximization problem with continuous Knapsack constraints
- Contraction mappings underlying undiscounted Markov decision problems
- Identification of discrete choice dynamic programming models with nonparametric distribution of unobservables
- Approximation of two-person zero-sum continuous-time Markov games with average payoff criterion
- Monotonicity and the principle of optimality
- Zur Extrapolation in Markoffschen Entscheidungsmodellen mit Diskontierung [On extrapolation in Markovian decision models with discounting]
- On a set of optimal policies in continuous time Markovian decision problem
- An elimination condition to check the validity of the principle of optimality
- Truncated policy iteration methods
- Probabilistic models for optimizing patients' survival rates
- Partial termination rule of Lagrangian relaxation for manufacturing cell formation problems
- A zero-sum stochastic game model of duopoly
- Contraction mappings underlying undiscounted Markov decision problems. II
- Heuristics for determining economic processing rates in a flexible manufacturing system
- Partially observable Markov decision processes and periodic policies with applications
- A method of bisection for discounted Markov decision problems
- Composing batches with yield uncertainty
- Solving Markovian decision processes by successive elimination of variables
- On efficiency of linear programming applied to discounted Markovian decision problems
- Multigrid methods for two-player zero-sum stochastic games
- Piecewise affine approximations for the control of a one-reservoir hydroelectric system
- Finite-state approximations to denumerable-state dynamic programs
- Robust shortest path planning and semicontractive dynamic programming
- Generalized dynamic programming for multicriteria optimization
- On a language for discrete dynamic programming and a microcomputer implementation
- Regular policies in abstract dynamic programming
- Classes of discrete optimization problems and their decision problems
- A multi-period TSP with stochastic regular and urgent demands
- Using geometric techniques to improve dynamic programming algorithms for the economic lot-sizing problem and extensions
- On Markov policies for minimax decision processes
- Boundedly optimal control of piecewise deterministic systems
- A model of project evaluation with limited attention
- Turnpike properties for a class of piecewise deterministic systems arising in manufacturing flow control
- Data-driven optimal control with a relaxed linear program
- Capacity expansion for a loss system with exponential demand growth
- Transient policies in discrete dynamic programming: Linear programming including suboptimality tests and additional constraints
- A multi-objective version of Bellman's inventory problem
- On Bellman's principle with inequality constraints
- A polynomial-time algorithm for computing an optimal admission policy in a GI/M/1/N queue
- Block-successive approximation for a discounted Markov decision model
- On the reduction of total-cost and average-cost MDPs to discounted MDPs
- Bounds on the fixed point of a monotone contraction operator
- Adaptive age replacement
- Solvable classes of discrete dynamic programming
- Optimality in transient Markov chains and linear programming
- A priori bounds for approximations of Markov programs
- The Bellman equation for vector-valued semi-Markovian dynamic programming
- A structured pattern matrix algorithm for multichain Markov decision processes
- Markov Decision Processes
- Solution of a Markovian decision problem by successive overrelaxation